00:00:00.001 Started by upstream project "autotest-per-patch" build number 126122 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.118 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.119 The recommended git tool is: git 00:00:00.119 using credential 00000000-0000-0000-0000-000000000002 00:00:00.121 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.161 Fetching changes from the remote Git repository 00:00:00.168 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.198 Using shallow fetch with depth 1 00:00:00.198 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.198 > git --version # timeout=10 00:00:00.225 > git --version # 'git version 2.39.2' 00:00:00.225 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.248 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.248 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.016 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.028 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.039 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:06.039 > git config core.sparsecheckout # timeout=10 00:00:06.050 > git read-tree -mu HEAD # timeout=10 00:00:06.066 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:06.083 Commit message: "inventory: add WCP3 to free inventory" 00:00:06.083 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:06.204 [Pipeline] Start of Pipeline 00:00:06.218 [Pipeline] library 00:00:06.219 Loading library shm_lib@master 00:00:06.219 Library shm_lib@master is cached. Copying from home. 00:00:06.231 [Pipeline] node 00:00:11.279 Running on VM-host-SM16 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:11.281 [Pipeline] { 00:00:11.292 [Pipeline] catchError 00:00:11.293 [Pipeline] { 00:00:11.307 [Pipeline] wrap 00:00:11.321 [Pipeline] { 00:00:11.332 [Pipeline] stage 00:00:11.334 [Pipeline] { (Prologue) 00:00:11.357 [Pipeline] echo 00:00:11.359 Node: VM-host-SM16 00:00:11.367 [Pipeline] cleanWs 00:00:11.375 [WS-CLEANUP] Deleting project workspace... 00:00:11.376 [WS-CLEANUP] Deferred wipeout is used... 
00:00:11.382 [WS-CLEANUP] done 00:00:11.561 [Pipeline] setCustomBuildProperty 00:00:11.641 [Pipeline] httpRequest 00:00:11.668 [Pipeline] echo 00:00:11.670 Sorcerer 10.211.164.101 is alive 00:00:11.679 [Pipeline] httpRequest 00:00:11.684 HttpMethod: GET 00:00:11.684 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:11.685 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:11.707 Response Code: HTTP/1.1 200 OK 00:00:11.707 Success: Status code 200 is in the accepted range: 200,404 00:00:11.708 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:31.330 [Pipeline] sh 00:00:31.656 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:31.674 [Pipeline] httpRequest 00:00:31.695 [Pipeline] echo 00:00:31.697 Sorcerer 10.211.164.101 is alive 00:00:31.707 [Pipeline] httpRequest 00:00:31.712 HttpMethod: GET 00:00:31.712 URL: http://10.211.164.101/packages/spdk_07d3b03c8ed08e4f19092b44a36ecaa0d34310cc.tar.gz 00:00:31.713 Sending request to url: http://10.211.164.101/packages/spdk_07d3b03c8ed08e4f19092b44a36ecaa0d34310cc.tar.gz 00:00:31.714 Response Code: HTTP/1.1 200 OK 00:00:31.715 Success: Status code 200 is in the accepted range: 200,404 00:00:31.715 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_07d3b03c8ed08e4f19092b44a36ecaa0d34310cc.tar.gz 00:00:45.583 [Pipeline] sh 00:00:45.860 + tar --no-same-owner -xf spdk_07d3b03c8ed08e4f19092b44a36ecaa0d34310cc.tar.gz 00:00:49.144 [Pipeline] sh 00:00:49.420 + git -C spdk log --oneline -n5 00:00:49.420 07d3b03c8 test/accel: parametrize accel tests for DSA kernel mode 00:00:49.420 192cfc373 test/common/autotest_common: managing idxd drivers setup 00:00:49.420 e118fc0cd test/setup: add configuration script for dsa devices 00:00:49.420 719d03c6a sock/uring: only register net impl if supported 00:00:49.420 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:00:49.440 [Pipeline] writeFile 00:00:49.456 [Pipeline] sh 00:00:49.734 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:49.747 [Pipeline] sh 00:00:50.026 + cat autorun-spdk.conf 00:00:50.026 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:50.026 SPDK_TEST_NVMF=1 00:00:50.026 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:50.026 SPDK_TEST_URING=1 00:00:50.026 SPDK_TEST_USDT=1 00:00:50.026 SPDK_RUN_UBSAN=1 00:00:50.026 NET_TYPE=virt 00:00:50.026 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:50.032 RUN_NIGHTLY=0 00:00:50.034 [Pipeline] } 00:00:50.050 [Pipeline] // stage 00:00:50.065 [Pipeline] stage 00:00:50.067 [Pipeline] { (Run VM) 00:00:50.081 [Pipeline] sh 00:00:50.359 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:50.359 + echo 'Start stage prepare_nvme.sh' 00:00:50.359 Start stage prepare_nvme.sh 00:00:50.359 + [[ -n 7 ]] 00:00:50.359 + disk_prefix=ex7 00:00:50.359 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:00:50.359 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:00:50.359 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:00:50.359 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:50.359 ++ SPDK_TEST_NVMF=1 00:00:50.359 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:50.359 ++ SPDK_TEST_URING=1 00:00:50.359 ++ SPDK_TEST_USDT=1 00:00:50.359 ++ SPDK_RUN_UBSAN=1 00:00:50.359 ++ NET_TYPE=virt 00:00:50.359 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:50.359 ++ RUN_NIGHTLY=0 
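(For reference, autorun-spdk.conf as catted and sourced above is a plain shell fragment of KEY=value pairs. A minimal sketch of how a test harness could consume it is shown below; the variable names are taken from the log, but the wrapper itself is illustrative only and is not the actual spdk/autorun.sh.)

  #!/usr/bin/env bash
  # Hedged sketch: source the job configuration and branch on its flags.
  set -e
  conf=${1:-autorun-spdk.conf}
  source "$conf"
  if [[ $SPDK_TEST_NVMF -eq 1 && $SPDK_TEST_NVMF_TRANSPORT == tcp ]]; then
      echo "NVMe-oF over TCP functional tests selected"
  fi
  [[ $SPDK_RUN_UBSAN -eq 1 ]] && echo "build will pass --enable-ubsan to configure"
  [[ $NET_TYPE == virt ]] && echo "tests will use virtual networking"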
00:00:50.359 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:50.359 + nvme_files=() 00:00:50.359 + declare -A nvme_files 00:00:50.359 + backend_dir=/var/lib/libvirt/images/backends 00:00:50.359 + nvme_files['nvme.img']=5G 00:00:50.359 + nvme_files['nvme-cmb.img']=5G 00:00:50.359 + nvme_files['nvme-multi0.img']=4G 00:00:50.359 + nvme_files['nvme-multi1.img']=4G 00:00:50.359 + nvme_files['nvme-multi2.img']=4G 00:00:50.359 + nvme_files['nvme-openstack.img']=8G 00:00:50.359 + nvme_files['nvme-zns.img']=5G 00:00:50.359 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:50.359 + (( SPDK_TEST_FTL == 1 )) 00:00:50.359 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:50.359 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:50.359 + for nvme in "${!nvme_files[@]}" 00:00:50.359 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G 00:00:50.359 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:50.359 + for nvme in "${!nvme_files[@]}" 00:00:50.359 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G 00:00:50.359 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:50.359 + for nvme in "${!nvme_files[@]}" 00:00:50.359 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G 00:00:50.359 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:50.359 + for nvme in "${!nvme_files[@]}" 00:00:50.359 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G 00:00:50.359 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:50.359 + for nvme in "${!nvme_files[@]}" 00:00:50.359 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G 00:00:50.359 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:50.359 + for nvme in "${!nvme_files[@]}" 00:00:50.359 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G 00:00:50.359 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:50.359 + for nvme in "${!nvme_files[@]}" 00:00:50.359 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G 00:00:50.359 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:50.359 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu 00:00:50.359 + echo 'End stage prepare_nvme.sh' 00:00:50.359 End stage prepare_nvme.sh 00:00:50.372 [Pipeline] sh 00:00:50.653 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:50.653 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora38 00:00:50.653 00:00:50.653 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 
00:00:50.653 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:00:50.653 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:50.653 HELP=0 00:00:50.653 DRY_RUN=0 00:00:50.653 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img, 00:00:50.653 NVME_DISKS_TYPE=nvme,nvme, 00:00:50.653 NVME_AUTO_CREATE=0 00:00:50.653 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img, 00:00:50.653 NVME_CMB=,, 00:00:50.653 NVME_PMR=,, 00:00:50.653 NVME_ZNS=,, 00:00:50.653 NVME_MS=,, 00:00:50.653 NVME_FDP=,, 00:00:50.653 SPDK_VAGRANT_DISTRO=fedora38 00:00:50.653 SPDK_VAGRANT_VMCPU=10 00:00:50.653 SPDK_VAGRANT_VMRAM=12288 00:00:50.653 SPDK_VAGRANT_PROVIDER=libvirt 00:00:50.653 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:50.653 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:50.653 SPDK_OPENSTACK_NETWORK=0 00:00:50.653 VAGRANT_PACKAGE_BOX=0 00:00:50.653 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:50.653 FORCE_DISTRO=true 00:00:50.653 VAGRANT_BOX_VERSION= 00:00:50.653 EXTRA_VAGRANTFILES= 00:00:50.653 NIC_MODEL=e1000 00:00:50.653 00:00:50.653 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt' 00:00:50.653 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:53.972 Bringing machine 'default' up with 'libvirt' provider... 00:00:54.919 ==> default: Creating image (snapshot of base box volume). 00:00:54.920 ==> default: Creating domain with the following settings... 00:00:54.920 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720787120_f949d3d2210de886e6f5 00:00:54.920 ==> default: -- Domain type: kvm 00:00:54.920 ==> default: -- Cpus: 10 00:00:54.920 ==> default: -- Feature: acpi 00:00:54.920 ==> default: -- Feature: apic 00:00:54.920 ==> default: -- Feature: pae 00:00:54.920 ==> default: -- Memory: 12288M 00:00:54.920 ==> default: -- Memory Backing: hugepages: 00:00:54.920 ==> default: -- Management MAC: 00:00:54.920 ==> default: -- Loader: 00:00:54.920 ==> default: -- Nvram: 00:00:54.920 ==> default: -- Base box: spdk/fedora38 00:00:54.920 ==> default: -- Storage pool: default 00:00:54.920 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720787120_f949d3d2210de886e6f5.img (20G) 00:00:54.920 ==> default: -- Volume Cache: default 00:00:54.920 ==> default: -- Kernel: 00:00:54.920 ==> default: -- Initrd: 00:00:54.920 ==> default: -- Graphics Type: vnc 00:00:54.920 ==> default: -- Graphics Port: -1 00:00:54.920 ==> default: -- Graphics IP: 127.0.0.1 00:00:54.920 ==> default: -- Graphics Password: Not defined 00:00:54.920 ==> default: -- Video Type: cirrus 00:00:54.920 ==> default: -- Video VRAM: 9216 00:00:54.920 ==> default: -- Sound Type: 00:00:54.920 ==> default: -- Keymap: en-us 00:00:54.920 ==> default: -- TPM Path: 00:00:54.920 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:54.920 ==> default: -- Command line args: 00:00:54.920 ==> default: -> value=-device, 00:00:54.920 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:54.920 ==> default: -> value=-drive, 00:00:54.920 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0, 00:00:54.920 ==> default: -> value=-device, 
00:00:54.920 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:54.920 ==> default: -> value=-device, 00:00:54.920 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:54.920 ==> default: -> value=-drive, 00:00:54.920 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:54.920 ==> default: -> value=-device, 00:00:54.920 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:54.920 ==> default: -> value=-drive, 00:00:54.920 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:54.920 ==> default: -> value=-device, 00:00:54.920 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:54.920 ==> default: -> value=-drive, 00:00:54.920 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:54.920 ==> default: -> value=-device, 00:00:54.920 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:54.920 ==> default: Creating shared folders metadata... 00:00:54.920 ==> default: Starting domain. 00:00:56.825 ==> default: Waiting for domain to get an IP address... 00:01:14.904 ==> default: Waiting for SSH to become available... 00:01:16.279 ==> default: Configuring and enabling network interfaces... 00:01:21.539 default: SSH address: 192.168.121.250:22 00:01:21.539 default: SSH username: vagrant 00:01:21.539 default: SSH auth method: private key 00:01:22.914 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:31.054 ==> default: Mounting SSHFS shared folder... 00:01:32.428 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:32.428 ==> default: Checking Mount.. 00:01:33.803 ==> default: Folder Successfully Mounted! 00:01:33.803 ==> default: Running provisioner: file... 00:01:34.739 default: ~/.gitconfig => .gitconfig 00:01:34.997 00:01:34.997 SUCCESS! 00:01:34.997 00:01:34.997 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:01:34.997 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:34.997 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 
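(The SUCCESS banner above spells out the manual workflow for the provisioned VM. As a short usage sketch, with the directory path copied from the log and the session itself hypothetical:)

  cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt
  vagrant ssh        # open a shell in the freshly provisioned fedora38 VM
  vagrant suspend    # pause the libvirt domain without destroying it
  vagrant resume     # bring it back
  vagrant destroy -f && rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt
                     # remove all trace of the VM, as the banner suggests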
00:01:34.997 00:01:35.004 [Pipeline] } 00:01:35.018 [Pipeline] // stage 00:01:35.027 [Pipeline] dir 00:01:35.027 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt 00:01:35.029 [Pipeline] { 00:01:35.040 [Pipeline] catchError 00:01:35.041 [Pipeline] { 00:01:35.053 [Pipeline] sh 00:01:35.329 + vagrant ssh-config --host vagrant 00:01:35.329 + sed -ne /^Host/,$p 00:01:35.329 + tee ssh_conf 00:01:39.590 Host vagrant 00:01:39.590 HostName 192.168.121.250 00:01:39.590 User vagrant 00:01:39.590 Port 22 00:01:39.590 UserKnownHostsFile /dev/null 00:01:39.590 StrictHostKeyChecking no 00:01:39.590 PasswordAuthentication no 00:01:39.590 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:39.590 IdentitiesOnly yes 00:01:39.590 LogLevel FATAL 00:01:39.590 ForwardAgent yes 00:01:39.590 ForwardX11 yes 00:01:39.590 00:01:39.604 [Pipeline] withEnv 00:01:39.607 [Pipeline] { 00:01:39.623 [Pipeline] sh 00:01:39.934 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:39.935 source /etc/os-release 00:01:39.935 [[ -e /image.version ]] && img=$(< /image.version) 00:01:39.935 # Minimal, systemd-like check. 00:01:39.935 if [[ -e /.dockerenv ]]; then 00:01:39.935 # Clear garbage from the node's name: 00:01:39.935 # agt-er_autotest_547-896 -> autotest_547-896 00:01:39.935 # $HOSTNAME is the actual container id 00:01:39.935 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:39.935 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:39.935 # We can assume this is a mount from a host where container is running, 00:01:39.935 # so fetch its hostname to easily identify the target swarm worker. 00:01:39.935 container="$(< /etc/hostname) ($agent)" 00:01:39.935 else 00:01:39.935 # Fallback 00:01:39.935 container=$agent 00:01:39.935 fi 00:01:39.935 fi 00:01:39.935 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:39.935 00:01:39.946 [Pipeline] } 00:01:39.966 [Pipeline] // withEnv 00:01:39.975 [Pipeline] setCustomBuildProperty 00:01:39.992 [Pipeline] stage 00:01:39.994 [Pipeline] { (Tests) 00:01:40.015 [Pipeline] sh 00:01:40.293 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:40.564 [Pipeline] sh 00:01:40.841 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:40.859 [Pipeline] timeout 00:01:40.859 Timeout set to expire in 30 min 00:01:40.861 [Pipeline] { 00:01:40.878 [Pipeline] sh 00:01:41.159 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:41.726 HEAD is now at 07d3b03c8 test/accel: parametrize accel tests for DSA kernel mode 00:01:41.741 [Pipeline] sh 00:01:42.021 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:42.292 [Pipeline] sh 00:01:42.607 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:42.625 [Pipeline] sh 00:01:42.903 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:43.161 ++ readlink -f spdk_repo 00:01:43.161 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:43.161 + [[ -n /home/vagrant/spdk_repo ]] 00:01:43.161 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:43.161 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 
00:01:43.161 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:43.161 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:01:43.161 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:43.161 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:43.161 + cd /home/vagrant/spdk_repo 00:01:43.161 + source /etc/os-release 00:01:43.161 ++ NAME='Fedora Linux' 00:01:43.161 ++ VERSION='38 (Cloud Edition)' 00:01:43.161 ++ ID=fedora 00:01:43.161 ++ VERSION_ID=38 00:01:43.161 ++ VERSION_CODENAME= 00:01:43.161 ++ PLATFORM_ID=platform:f38 00:01:43.161 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:43.161 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:43.161 ++ LOGO=fedora-logo-icon 00:01:43.161 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:43.161 ++ HOME_URL=https://fedoraproject.org/ 00:01:43.161 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:43.161 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:43.161 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:43.161 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:43.161 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:43.161 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:43.161 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:43.161 ++ SUPPORT_END=2024-05-14 00:01:43.161 ++ VARIANT='Cloud Edition' 00:01:43.161 ++ VARIANT_ID=cloud 00:01:43.161 + uname -a 00:01:43.161 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:43.161 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:43.419 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:43.419 Hugepages 00:01:43.419 node hugesize free / total 00:01:43.419 node0 1048576kB 0 / 0 00:01:43.678 node0 2048kB 0 / 0 00:01:43.678 00:01:43.678 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:43.678 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:43.678 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:43.678 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:43.678 + rm -f /tmp/spdk-ld-path 00:01:43.678 + source autorun-spdk.conf 00:01:43.678 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:43.678 ++ SPDK_TEST_NVMF=1 00:01:43.678 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:43.678 ++ SPDK_TEST_URING=1 00:01:43.678 ++ SPDK_TEST_USDT=1 00:01:43.678 ++ SPDK_RUN_UBSAN=1 00:01:43.678 ++ NET_TYPE=virt 00:01:43.678 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:43.678 ++ RUN_NIGHTLY=0 00:01:43.678 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:43.678 + [[ -n '' ]] 00:01:43.678 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:43.678 + for M in /var/spdk/build-*-manifest.txt 00:01:43.678 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:43.678 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:43.678 + for M in /var/spdk/build-*-manifest.txt 00:01:43.678 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:43.678 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:43.678 ++ uname 00:01:43.678 + [[ Linux == \L\i\n\u\x ]] 00:01:43.678 + sudo dmesg -T 00:01:43.678 + sudo dmesg --clear 00:01:43.678 + dmesg_pid=5277 00:01:43.678 + sudo dmesg -Tw 00:01:43.678 + [[ Fedora Linux == FreeBSD ]] 00:01:43.678 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:43.678 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:43.678 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:43.678 + [[ -x 
/usr/src/fio-static/fio ]] 00:01:43.678 + export FIO_BIN=/usr/src/fio-static/fio 00:01:43.678 + FIO_BIN=/usr/src/fio-static/fio 00:01:43.678 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:43.678 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:43.678 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:43.678 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:43.678 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:43.678 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:43.678 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:43.678 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:43.678 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:43.678 Test configuration: 00:01:43.678 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:43.678 SPDK_TEST_NVMF=1 00:01:43.678 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:43.678 SPDK_TEST_URING=1 00:01:43.678 SPDK_TEST_USDT=1 00:01:43.678 SPDK_RUN_UBSAN=1 00:01:43.678 NET_TYPE=virt 00:01:43.678 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:43.976 RUN_NIGHTLY=0 12:26:09 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:43.976 12:26:09 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:43.976 12:26:09 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:43.976 12:26:09 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:43.976 12:26:09 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:43.976 12:26:09 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:43.976 12:26:09 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:43.976 12:26:09 -- paths/export.sh@5 -- $ export PATH 00:01:43.976 12:26:09 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:43.976 12:26:09 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:43.976 12:26:09 -- common/autobuild_common.sh@444 -- $ date +%s 00:01:43.976 12:26:09 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720787169.XXXXXX 00:01:43.976 12:26:09 -- common/autobuild_common.sh@444 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1720787169.lXijeW 00:01:43.976 12:26:09 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:01:43.976 12:26:09 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:01:43.976 12:26:09 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:43.976 12:26:09 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:43.976 12:26:09 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:43.976 12:26:09 -- common/autobuild_common.sh@460 -- $ get_config_params 00:01:43.976 12:26:09 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:43.976 12:26:09 -- common/autotest_common.sh@10 -- $ set +x 00:01:43.976 12:26:09 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:01:43.976 12:26:09 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:01:43.976 12:26:09 -- pm/common@17 -- $ local monitor 00:01:43.976 12:26:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:43.976 12:26:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:43.976 12:26:09 -- pm/common@25 -- $ sleep 1 00:01:43.976 12:26:09 -- pm/common@21 -- $ date +%s 00:01:43.976 12:26:09 -- pm/common@21 -- $ date +%s 00:01:43.976 12:26:09 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720787169 00:01:43.976 12:26:09 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720787169 00:01:43.976 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720787169_collect-vmstat.pm.log 00:01:43.976 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720787169_collect-cpu-load.pm.log 00:01:44.909 12:26:10 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:01:44.909 12:26:10 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:44.909 12:26:10 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:44.909 12:26:10 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:44.909 12:26:10 -- spdk/autobuild.sh@16 -- $ date -u 00:01:44.909 Fri Jul 12 12:26:10 PM UTC 2024 00:01:44.909 12:26:10 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:44.909 v24.09-pre-205-g07d3b03c8 00:01:44.909 12:26:10 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:44.909 12:26:10 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:44.909 12:26:10 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:44.909 12:26:10 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:44.909 12:26:10 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:44.909 12:26:10 -- common/autotest_common.sh@10 -- $ set +x 00:01:44.909 ************************************ 00:01:44.909 START TEST ubsan 00:01:44.909 ************************************ 00:01:44.909 using ubsan 00:01:44.909 12:26:10 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:44.909 00:01:44.909 real 0m0.000s 
00:01:44.909 user 0m0.000s 00:01:44.909 sys 0m0.000s 00:01:44.909 12:26:10 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:44.909 12:26:10 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:44.909 ************************************ 00:01:44.909 END TEST ubsan 00:01:44.909 ************************************ 00:01:44.909 12:26:10 -- common/autotest_common.sh@1142 -- $ return 0 00:01:44.909 12:26:10 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:44.909 12:26:10 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:44.909 12:26:10 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:44.909 12:26:10 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:44.909 12:26:10 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:44.909 12:26:10 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:44.909 12:26:10 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:44.909 12:26:10 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:44.909 12:26:10 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:01:45.167 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:45.167 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:45.424 Using 'verbs' RDMA provider 00:02:01.259 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:11.283 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:11.847 Creating mk/config.mk...done. 00:02:11.847 Creating mk/cc.flags.mk...done. 00:02:11.847 Type 'make' to build. 00:02:11.847 12:26:37 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:11.847 12:26:37 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:11.847 12:26:37 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:11.847 12:26:37 -- common/autotest_common.sh@10 -- $ set +x 00:02:11.847 ************************************ 00:02:11.847 START TEST make 00:02:11.847 ************************************ 00:02:11.847 12:26:37 make -- common/autotest_common.sh@1123 -- $ make -j10 00:02:12.105 make[1]: Nothing to be done for 'all'. 
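(The configure flags and the make job count used by autobuild are visible in the trace above. Reproduced as a standalone sketch, assuming a checkout under /home/vagrant/spdk_repo/spdk as in the log:)

  cd /home/vagrant/spdk_repo/spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-usdt \
              --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator \
              --disable-unit-tests --enable-ubsan --enable-coverage \
              --with-ublk --with-uring --with-shared
  make -j10   # same job count the harness passes to "run_test make"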
00:02:24.362 The Meson build system 00:02:24.362 Version: 1.3.1 00:02:24.362 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:24.362 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:24.362 Build type: native build 00:02:24.362 Program cat found: YES (/usr/bin/cat) 00:02:24.362 Project name: DPDK 00:02:24.362 Project version: 24.03.0 00:02:24.362 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:24.362 C linker for the host machine: cc ld.bfd 2.39-16 00:02:24.362 Host machine cpu family: x86_64 00:02:24.362 Host machine cpu: x86_64 00:02:24.362 Message: ## Building in Developer Mode ## 00:02:24.362 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:24.362 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:24.362 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:24.362 Program python3 found: YES (/usr/bin/python3) 00:02:24.362 Program cat found: YES (/usr/bin/cat) 00:02:24.362 Compiler for C supports arguments -march=native: YES 00:02:24.362 Checking for size of "void *" : 8 00:02:24.362 Checking for size of "void *" : 8 (cached) 00:02:24.362 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:24.362 Library m found: YES 00:02:24.362 Library numa found: YES 00:02:24.362 Has header "numaif.h" : YES 00:02:24.362 Library fdt found: NO 00:02:24.362 Library execinfo found: NO 00:02:24.362 Has header "execinfo.h" : YES 00:02:24.362 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:24.362 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:24.362 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:24.362 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:24.362 Run-time dependency openssl found: YES 3.0.9 00:02:24.362 Run-time dependency libpcap found: YES 1.10.4 00:02:24.362 Has header "pcap.h" with dependency libpcap: YES 00:02:24.362 Compiler for C supports arguments -Wcast-qual: YES 00:02:24.362 Compiler for C supports arguments -Wdeprecated: YES 00:02:24.362 Compiler for C supports arguments -Wformat: YES 00:02:24.362 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:24.362 Compiler for C supports arguments -Wformat-security: NO 00:02:24.362 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:24.362 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:24.362 Compiler for C supports arguments -Wnested-externs: YES 00:02:24.362 Compiler for C supports arguments -Wold-style-definition: YES 00:02:24.362 Compiler for C supports arguments -Wpointer-arith: YES 00:02:24.362 Compiler for C supports arguments -Wsign-compare: YES 00:02:24.362 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:24.362 Compiler for C supports arguments -Wundef: YES 00:02:24.362 Compiler for C supports arguments -Wwrite-strings: YES 00:02:24.362 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:24.362 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:24.362 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:24.362 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:24.362 Program objdump found: YES (/usr/bin/objdump) 00:02:24.362 Compiler for C supports arguments -mavx512f: YES 00:02:24.362 Checking if "AVX512 checking" compiles: YES 00:02:24.362 Fetching value of define "__SSE4_2__" : 1 00:02:24.362 Fetching value of define 
"__AES__" : 1 00:02:24.362 Fetching value of define "__AVX__" : 1 00:02:24.362 Fetching value of define "__AVX2__" : 1 00:02:24.362 Fetching value of define "__AVX512BW__" : (undefined) 00:02:24.362 Fetching value of define "__AVX512CD__" : (undefined) 00:02:24.362 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:24.362 Fetching value of define "__AVX512F__" : (undefined) 00:02:24.362 Fetching value of define "__AVX512VL__" : (undefined) 00:02:24.362 Fetching value of define "__PCLMUL__" : 1 00:02:24.362 Fetching value of define "__RDRND__" : 1 00:02:24.362 Fetching value of define "__RDSEED__" : 1 00:02:24.362 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:24.362 Fetching value of define "__znver1__" : (undefined) 00:02:24.362 Fetching value of define "__znver2__" : (undefined) 00:02:24.362 Fetching value of define "__znver3__" : (undefined) 00:02:24.362 Fetching value of define "__znver4__" : (undefined) 00:02:24.362 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:24.362 Message: lib/log: Defining dependency "log" 00:02:24.362 Message: lib/kvargs: Defining dependency "kvargs" 00:02:24.362 Message: lib/telemetry: Defining dependency "telemetry" 00:02:24.362 Checking for function "getentropy" : NO 00:02:24.362 Message: lib/eal: Defining dependency "eal" 00:02:24.362 Message: lib/ring: Defining dependency "ring" 00:02:24.362 Message: lib/rcu: Defining dependency "rcu" 00:02:24.362 Message: lib/mempool: Defining dependency "mempool" 00:02:24.362 Message: lib/mbuf: Defining dependency "mbuf" 00:02:24.362 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:24.362 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:24.362 Compiler for C supports arguments -mpclmul: YES 00:02:24.362 Compiler for C supports arguments -maes: YES 00:02:24.362 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:24.362 Compiler for C supports arguments -mavx512bw: YES 00:02:24.362 Compiler for C supports arguments -mavx512dq: YES 00:02:24.362 Compiler for C supports arguments -mavx512vl: YES 00:02:24.362 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:24.362 Compiler for C supports arguments -mavx2: YES 00:02:24.362 Compiler for C supports arguments -mavx: YES 00:02:24.362 Message: lib/net: Defining dependency "net" 00:02:24.362 Message: lib/meter: Defining dependency "meter" 00:02:24.362 Message: lib/ethdev: Defining dependency "ethdev" 00:02:24.362 Message: lib/pci: Defining dependency "pci" 00:02:24.362 Message: lib/cmdline: Defining dependency "cmdline" 00:02:24.362 Message: lib/hash: Defining dependency "hash" 00:02:24.362 Message: lib/timer: Defining dependency "timer" 00:02:24.362 Message: lib/compressdev: Defining dependency "compressdev" 00:02:24.362 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:24.362 Message: lib/dmadev: Defining dependency "dmadev" 00:02:24.362 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:24.362 Message: lib/power: Defining dependency "power" 00:02:24.362 Message: lib/reorder: Defining dependency "reorder" 00:02:24.362 Message: lib/security: Defining dependency "security" 00:02:24.362 Has header "linux/userfaultfd.h" : YES 00:02:24.362 Has header "linux/vduse.h" : YES 00:02:24.362 Message: lib/vhost: Defining dependency "vhost" 00:02:24.362 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:24.362 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:24.362 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:24.362 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:24.362 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:24.362 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:24.362 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:24.362 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:24.362 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:24.362 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:24.362 Program doxygen found: YES (/usr/bin/doxygen) 00:02:24.362 Configuring doxy-api-html.conf using configuration 00:02:24.362 Configuring doxy-api-man.conf using configuration 00:02:24.362 Program mandb found: YES (/usr/bin/mandb) 00:02:24.362 Program sphinx-build found: NO 00:02:24.362 Configuring rte_build_config.h using configuration 00:02:24.362 Message: 00:02:24.362 ================= 00:02:24.362 Applications Enabled 00:02:24.362 ================= 00:02:24.362 00:02:24.362 apps: 00:02:24.362 00:02:24.362 00:02:24.362 Message: 00:02:24.362 ================= 00:02:24.362 Libraries Enabled 00:02:24.362 ================= 00:02:24.362 00:02:24.362 libs: 00:02:24.362 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:24.362 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:24.362 cryptodev, dmadev, power, reorder, security, vhost, 00:02:24.362 00:02:24.362 Message: 00:02:24.362 =============== 00:02:24.362 Drivers Enabled 00:02:24.362 =============== 00:02:24.362 00:02:24.362 common: 00:02:24.362 00:02:24.362 bus: 00:02:24.362 pci, vdev, 00:02:24.362 mempool: 00:02:24.362 ring, 00:02:24.362 dma: 00:02:24.362 00:02:24.362 net: 00:02:24.362 00:02:24.362 crypto: 00:02:24.362 00:02:24.362 compress: 00:02:24.362 00:02:24.362 vdpa: 00:02:24.362 00:02:24.362 00:02:24.362 Message: 00:02:24.362 ================= 00:02:24.362 Content Skipped 00:02:24.362 ================= 00:02:24.362 00:02:24.362 apps: 00:02:24.362 dumpcap: explicitly disabled via build config 00:02:24.362 graph: explicitly disabled via build config 00:02:24.362 pdump: explicitly disabled via build config 00:02:24.362 proc-info: explicitly disabled via build config 00:02:24.362 test-acl: explicitly disabled via build config 00:02:24.362 test-bbdev: explicitly disabled via build config 00:02:24.362 test-cmdline: explicitly disabled via build config 00:02:24.362 test-compress-perf: explicitly disabled via build config 00:02:24.362 test-crypto-perf: explicitly disabled via build config 00:02:24.362 test-dma-perf: explicitly disabled via build config 00:02:24.362 test-eventdev: explicitly disabled via build config 00:02:24.362 test-fib: explicitly disabled via build config 00:02:24.362 test-flow-perf: explicitly disabled via build config 00:02:24.362 test-gpudev: explicitly disabled via build config 00:02:24.362 test-mldev: explicitly disabled via build config 00:02:24.363 test-pipeline: explicitly disabled via build config 00:02:24.363 test-pmd: explicitly disabled via build config 00:02:24.363 test-regex: explicitly disabled via build config 00:02:24.363 test-sad: explicitly disabled via build config 00:02:24.363 test-security-perf: explicitly disabled via build config 00:02:24.363 00:02:24.363 libs: 00:02:24.363 argparse: explicitly disabled via build config 00:02:24.363 metrics: explicitly disabled via build config 00:02:24.363 acl: explicitly disabled via build config 00:02:24.363 bbdev: explicitly disabled via build config 00:02:24.363 
bitratestats: explicitly disabled via build config 00:02:24.363 bpf: explicitly disabled via build config 00:02:24.363 cfgfile: explicitly disabled via build config 00:02:24.363 distributor: explicitly disabled via build config 00:02:24.363 efd: explicitly disabled via build config 00:02:24.363 eventdev: explicitly disabled via build config 00:02:24.363 dispatcher: explicitly disabled via build config 00:02:24.363 gpudev: explicitly disabled via build config 00:02:24.363 gro: explicitly disabled via build config 00:02:24.363 gso: explicitly disabled via build config 00:02:24.363 ip_frag: explicitly disabled via build config 00:02:24.363 jobstats: explicitly disabled via build config 00:02:24.363 latencystats: explicitly disabled via build config 00:02:24.363 lpm: explicitly disabled via build config 00:02:24.363 member: explicitly disabled via build config 00:02:24.363 pcapng: explicitly disabled via build config 00:02:24.363 rawdev: explicitly disabled via build config 00:02:24.363 regexdev: explicitly disabled via build config 00:02:24.363 mldev: explicitly disabled via build config 00:02:24.363 rib: explicitly disabled via build config 00:02:24.363 sched: explicitly disabled via build config 00:02:24.363 stack: explicitly disabled via build config 00:02:24.363 ipsec: explicitly disabled via build config 00:02:24.363 pdcp: explicitly disabled via build config 00:02:24.363 fib: explicitly disabled via build config 00:02:24.363 port: explicitly disabled via build config 00:02:24.363 pdump: explicitly disabled via build config 00:02:24.363 table: explicitly disabled via build config 00:02:24.363 pipeline: explicitly disabled via build config 00:02:24.363 graph: explicitly disabled via build config 00:02:24.363 node: explicitly disabled via build config 00:02:24.363 00:02:24.363 drivers: 00:02:24.363 common/cpt: not in enabled drivers build config 00:02:24.363 common/dpaax: not in enabled drivers build config 00:02:24.363 common/iavf: not in enabled drivers build config 00:02:24.363 common/idpf: not in enabled drivers build config 00:02:24.363 common/ionic: not in enabled drivers build config 00:02:24.363 common/mvep: not in enabled drivers build config 00:02:24.363 common/octeontx: not in enabled drivers build config 00:02:24.363 bus/auxiliary: not in enabled drivers build config 00:02:24.363 bus/cdx: not in enabled drivers build config 00:02:24.363 bus/dpaa: not in enabled drivers build config 00:02:24.363 bus/fslmc: not in enabled drivers build config 00:02:24.363 bus/ifpga: not in enabled drivers build config 00:02:24.363 bus/platform: not in enabled drivers build config 00:02:24.363 bus/uacce: not in enabled drivers build config 00:02:24.363 bus/vmbus: not in enabled drivers build config 00:02:24.363 common/cnxk: not in enabled drivers build config 00:02:24.363 common/mlx5: not in enabled drivers build config 00:02:24.363 common/nfp: not in enabled drivers build config 00:02:24.363 common/nitrox: not in enabled drivers build config 00:02:24.363 common/qat: not in enabled drivers build config 00:02:24.363 common/sfc_efx: not in enabled drivers build config 00:02:24.363 mempool/bucket: not in enabled drivers build config 00:02:24.363 mempool/cnxk: not in enabled drivers build config 00:02:24.363 mempool/dpaa: not in enabled drivers build config 00:02:24.363 mempool/dpaa2: not in enabled drivers build config 00:02:24.363 mempool/octeontx: not in enabled drivers build config 00:02:24.363 mempool/stack: not in enabled drivers build config 00:02:24.363 dma/cnxk: not in enabled drivers build 
config 00:02:24.363 dma/dpaa: not in enabled drivers build config 00:02:24.363 dma/dpaa2: not in enabled drivers build config 00:02:24.363 dma/hisilicon: not in enabled drivers build config 00:02:24.363 dma/idxd: not in enabled drivers build config 00:02:24.363 dma/ioat: not in enabled drivers build config 00:02:24.363 dma/skeleton: not in enabled drivers build config 00:02:24.363 net/af_packet: not in enabled drivers build config 00:02:24.363 net/af_xdp: not in enabled drivers build config 00:02:24.363 net/ark: not in enabled drivers build config 00:02:24.363 net/atlantic: not in enabled drivers build config 00:02:24.363 net/avp: not in enabled drivers build config 00:02:24.363 net/axgbe: not in enabled drivers build config 00:02:24.363 net/bnx2x: not in enabled drivers build config 00:02:24.363 net/bnxt: not in enabled drivers build config 00:02:24.363 net/bonding: not in enabled drivers build config 00:02:24.363 net/cnxk: not in enabled drivers build config 00:02:24.363 net/cpfl: not in enabled drivers build config 00:02:24.363 net/cxgbe: not in enabled drivers build config 00:02:24.363 net/dpaa: not in enabled drivers build config 00:02:24.363 net/dpaa2: not in enabled drivers build config 00:02:24.363 net/e1000: not in enabled drivers build config 00:02:24.363 net/ena: not in enabled drivers build config 00:02:24.363 net/enetc: not in enabled drivers build config 00:02:24.363 net/enetfec: not in enabled drivers build config 00:02:24.363 net/enic: not in enabled drivers build config 00:02:24.363 net/failsafe: not in enabled drivers build config 00:02:24.363 net/fm10k: not in enabled drivers build config 00:02:24.363 net/gve: not in enabled drivers build config 00:02:24.363 net/hinic: not in enabled drivers build config 00:02:24.363 net/hns3: not in enabled drivers build config 00:02:24.363 net/i40e: not in enabled drivers build config 00:02:24.363 net/iavf: not in enabled drivers build config 00:02:24.363 net/ice: not in enabled drivers build config 00:02:24.363 net/idpf: not in enabled drivers build config 00:02:24.363 net/igc: not in enabled drivers build config 00:02:24.363 net/ionic: not in enabled drivers build config 00:02:24.363 net/ipn3ke: not in enabled drivers build config 00:02:24.363 net/ixgbe: not in enabled drivers build config 00:02:24.363 net/mana: not in enabled drivers build config 00:02:24.363 net/memif: not in enabled drivers build config 00:02:24.363 net/mlx4: not in enabled drivers build config 00:02:24.363 net/mlx5: not in enabled drivers build config 00:02:24.363 net/mvneta: not in enabled drivers build config 00:02:24.363 net/mvpp2: not in enabled drivers build config 00:02:24.363 net/netvsc: not in enabled drivers build config 00:02:24.363 net/nfb: not in enabled drivers build config 00:02:24.363 net/nfp: not in enabled drivers build config 00:02:24.363 net/ngbe: not in enabled drivers build config 00:02:24.363 net/null: not in enabled drivers build config 00:02:24.363 net/octeontx: not in enabled drivers build config 00:02:24.363 net/octeon_ep: not in enabled drivers build config 00:02:24.363 net/pcap: not in enabled drivers build config 00:02:24.363 net/pfe: not in enabled drivers build config 00:02:24.363 net/qede: not in enabled drivers build config 00:02:24.363 net/ring: not in enabled drivers build config 00:02:24.363 net/sfc: not in enabled drivers build config 00:02:24.363 net/softnic: not in enabled drivers build config 00:02:24.363 net/tap: not in enabled drivers build config 00:02:24.363 net/thunderx: not in enabled drivers build config 00:02:24.363 
net/txgbe: not in enabled drivers build config 00:02:24.363 net/vdev_netvsc: not in enabled drivers build config 00:02:24.363 net/vhost: not in enabled drivers build config 00:02:24.363 net/virtio: not in enabled drivers build config 00:02:24.363 net/vmxnet3: not in enabled drivers build config 00:02:24.363 raw/*: missing internal dependency, "rawdev" 00:02:24.363 crypto/armv8: not in enabled drivers build config 00:02:24.363 crypto/bcmfs: not in enabled drivers build config 00:02:24.363 crypto/caam_jr: not in enabled drivers build config 00:02:24.363 crypto/ccp: not in enabled drivers build config 00:02:24.363 crypto/cnxk: not in enabled drivers build config 00:02:24.363 crypto/dpaa_sec: not in enabled drivers build config 00:02:24.363 crypto/dpaa2_sec: not in enabled drivers build config 00:02:24.363 crypto/ipsec_mb: not in enabled drivers build config 00:02:24.363 crypto/mlx5: not in enabled drivers build config 00:02:24.363 crypto/mvsam: not in enabled drivers build config 00:02:24.363 crypto/nitrox: not in enabled drivers build config 00:02:24.363 crypto/null: not in enabled drivers build config 00:02:24.363 crypto/octeontx: not in enabled drivers build config 00:02:24.363 crypto/openssl: not in enabled drivers build config 00:02:24.363 crypto/scheduler: not in enabled drivers build config 00:02:24.363 crypto/uadk: not in enabled drivers build config 00:02:24.363 crypto/virtio: not in enabled drivers build config 00:02:24.363 compress/isal: not in enabled drivers build config 00:02:24.363 compress/mlx5: not in enabled drivers build config 00:02:24.363 compress/nitrox: not in enabled drivers build config 00:02:24.363 compress/octeontx: not in enabled drivers build config 00:02:24.363 compress/zlib: not in enabled drivers build config 00:02:24.363 regex/*: missing internal dependency, "regexdev" 00:02:24.363 ml/*: missing internal dependency, "mldev" 00:02:24.363 vdpa/ifc: not in enabled drivers build config 00:02:24.363 vdpa/mlx5: not in enabled drivers build config 00:02:24.363 vdpa/nfp: not in enabled drivers build config 00:02:24.363 vdpa/sfc: not in enabled drivers build config 00:02:24.363 event/*: missing internal dependency, "eventdev" 00:02:24.363 baseband/*: missing internal dependency, "bbdev" 00:02:24.363 gpu/*: missing internal dependency, "gpudev" 00:02:24.363 00:02:24.363 00:02:24.621 Build targets in project: 85 00:02:24.621 00:02:24.621 DPDK 24.03.0 00:02:24.621 00:02:24.621 User defined options 00:02:24.621 buildtype : debug 00:02:24.621 default_library : shared 00:02:24.621 libdir : lib 00:02:24.621 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:24.621 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:24.621 c_link_args : 00:02:24.621 cpu_instruction_set: native 00:02:24.621 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:24.621 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:24.621 enable_docs : false 00:02:24.621 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:24.621 enable_kmods : false 00:02:24.621 max_lcores : 128 00:02:24.621 tests : false 00:02:24.621 00:02:24.621 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:24.879 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:24.879 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:25.137 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:25.137 [3/268] Linking static target lib/librte_kvargs.a 00:02:25.137 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:25.137 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:25.137 [6/268] Linking static target lib/librte_log.a 00:02:25.396 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.655 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:25.915 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:25.915 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:25.915 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:25.915 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:25.915 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:25.915 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:25.915 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:25.915 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:25.915 [17/268] Linking static target lib/librte_telemetry.a 00:02:25.915 [18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.173 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:26.173 [20/268] Linking target lib/librte_log.so.24.1 00:02:26.432 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:26.432 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:26.691 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:26.691 [24/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:26.691 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:26.691 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:26.691 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:26.691 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:26.950 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:26.950 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:26.950 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:26.950 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.950 [33/268] Linking target lib/librte_telemetry.so.24.1 00:02:26.950 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:27.208 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:27.208 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:27.465 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:27.723 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:27.723 
[39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:27.723 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:27.723 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:27.723 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:27.723 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:27.723 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:27.723 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:27.723 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:27.723 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:27.982 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:28.241 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:28.241 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:28.500 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:28.500 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:28.500 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:28.759 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:28.759 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:28.759 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:28.759 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:28.759 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:28.759 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:29.018 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:29.276 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:29.276 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:29.276 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:29.276 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:29.535 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:29.535 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:29.535 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:29.793 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:29.793 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:30.052 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:30.052 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:30.052 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:30.311 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:30.311 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:30.311 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:30.311 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:30.311 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:30.569 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:30.569 [79/268] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:30.569 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:30.569 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:30.827 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:30.827 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:30.827 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:30.827 [85/268] Linking static target lib/librte_eal.a 00:02:30.827 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:30.827 [87/268] Linking static target lib/librte_ring.a 00:02:31.085 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:31.085 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:31.085 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:31.085 [91/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:31.085 [92/268] Linking static target lib/librte_rcu.a 00:02:31.343 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:31.343 [94/268] Linking static target lib/librte_mempool.a 00:02:31.343 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:31.343 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:31.343 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.601 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:31.601 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:31.860 [100/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.860 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:31.860 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:31.860 [103/268] Linking static target lib/librte_mbuf.a 00:02:31.860 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:31.860 [105/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:32.119 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:32.119 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:32.119 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:32.119 [109/268] Linking static target lib/librte_net.a 00:02:32.377 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:32.377 [111/268] Linking static target lib/librte_meter.a 00:02:32.635 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.635 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:32.635 [114/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.635 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:32.635 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:32.893 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.893 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:33.151 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.410 [120/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:33.668 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:33.668 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:33.668 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:33.926 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:33.926 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:33.926 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:34.183 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:34.183 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:34.183 [129/268] Linking static target lib/librte_pci.a 00:02:34.183 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:34.183 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:34.183 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:34.183 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:34.183 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:34.183 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:34.442 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:34.442 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:34.442 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:34.442 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:34.442 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:34.442 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:34.442 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:34.442 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:34.442 [144/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.700 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:34.700 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:34.700 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:34.700 [148/268] Linking static target lib/librte_ethdev.a 00:02:34.957 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:34.957 [150/268] Linking static target lib/librte_cmdline.a 00:02:34.957 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:34.957 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:35.215 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:35.215 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:35.215 [155/268] Linking static target lib/librte_timer.a 00:02:35.215 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:35.474 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:35.474 [158/268] Linking static target lib/librte_hash.a 00:02:35.732 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:35.733 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:35.733 
[161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:35.990 [162/268] Linking static target lib/librte_compressdev.a 00:02:35.990 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.990 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:35.990 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:36.249 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:36.249 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:36.249 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:36.249 [169/268] Linking static target lib/librte_dmadev.a 00:02:36.507 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:36.507 [171/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:36.507 [172/268] Linking static target lib/librte_cryptodev.a 00:02:36.507 [173/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.507 [174/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.507 [175/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:36.507 [176/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:36.765 [177/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.032 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:37.032 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:37.032 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:37.032 [181/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:37.032 [182/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.292 [183/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:37.292 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:37.550 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:37.550 [186/268] Linking static target lib/librte_power.a 00:02:37.550 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:37.550 [188/268] Linking static target lib/librte_reorder.a 00:02:37.808 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:37.808 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:37.808 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:37.808 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:37.808 [193/268] Linking static target lib/librte_security.a 00:02:38.066 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.066 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:38.327 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.327 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.585 [198/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.585 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 
00:02:38.585 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:38.585 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:38.843 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:38.843 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:39.101 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:39.101 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:39.101 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:39.101 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:39.359 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:39.359 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:39.359 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:39.359 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:39.359 [212/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:39.618 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:39.618 [214/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:39.618 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:39.618 [216/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:39.618 [217/268] Linking static target drivers/librte_bus_vdev.a 00:02:39.618 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:39.618 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:39.618 [220/268] Linking static target drivers/librte_bus_pci.a 00:02:39.618 [221/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:39.618 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:39.875 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.875 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:39.875 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:39.875 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:39.875 [227/268] Linking static target drivers/librte_mempool_ring.a 00:02:40.132 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.700 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:40.700 [230/268] Linking static target lib/librte_vhost.a 00:02:41.662 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.662 [232/268] Linking target lib/librte_eal.so.24.1 00:02:41.963 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:41.963 [234/268] Linking target lib/librte_meter.so.24.1 00:02:41.963 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:41.963 [236/268] Linking target lib/librte_dmadev.so.24.1 00:02:41.963 [237/268] Linking target lib/librte_pci.so.24.1 00:02:41.963 [238/268] Linking target lib/librte_timer.so.24.1 00:02:41.963 [239/268] Linking target 
lib/librte_ring.so.24.1 00:02:41.963 [240/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.220 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:42.220 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:42.220 [243/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.221 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:42.221 [245/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:42.221 [246/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:42.221 [247/268] Linking target lib/librte_rcu.so.24.1 00:02:42.221 [248/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:42.221 [249/268] Linking target lib/librte_mempool.so.24.1 00:02:42.221 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:42.478 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:42.478 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:42.478 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:42.478 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:42.478 [255/268] Linking target lib/librte_net.so.24.1 00:02:42.478 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:42.736 [257/268] Linking target lib/librte_reorder.so.24.1 00:02:42.736 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:42.736 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:42.736 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:42.736 [261/268] Linking target lib/librte_cmdline.so.24.1 00:02:42.736 [262/268] Linking target lib/librte_hash.so.24.1 00:02:42.736 [263/268] Linking target lib/librte_ethdev.so.24.1 00:02:42.736 [264/268] Linking target lib/librte_security.so.24.1 00:02:42.994 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:42.994 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:42.994 [267/268] Linking target lib/librte_power.so.24.1 00:02:42.994 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:42.994 INFO: autodetecting backend as ninja 00:02:42.994 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:44.368 CC lib/ut/ut.o 00:02:44.368 CC lib/ut_mock/mock.o 00:02:44.368 CC lib/log/log.o 00:02:44.368 CC lib/log/log_flags.o 00:02:44.368 CC lib/log/log_deprecated.o 00:02:44.368 LIB libspdk_log.a 00:02:44.368 LIB libspdk_ut.a 00:02:44.368 LIB libspdk_ut_mock.a 00:02:44.368 SO libspdk_ut.so.2.0 00:02:44.368 SO libspdk_ut_mock.so.6.0 00:02:44.368 SO libspdk_log.so.7.0 00:02:44.624 SYMLINK libspdk_ut.so 00:02:44.624 SYMLINK libspdk_ut_mock.so 00:02:44.624 SYMLINK libspdk_log.so 00:02:44.882 CC lib/util/base64.o 00:02:44.882 CC lib/util/bit_array.o 00:02:44.882 CC lib/util/cpuset.o 00:02:44.882 CC lib/ioat/ioat.o 00:02:44.882 CC lib/util/crc16.o 00:02:44.882 CXX lib/trace_parser/trace.o 00:02:44.882 CC lib/dma/dma.o 00:02:44.882 CC lib/util/crc32.o 00:02:44.882 CC lib/util/crc32c.o 00:02:44.882 CC lib/vfio_user/host/vfio_user_pci.o 00:02:44.882 CC lib/util/crc32_ieee.o 00:02:44.882 CC lib/util/crc64.o 00:02:44.882 
CC lib/vfio_user/host/vfio_user.o 00:02:44.882 LIB libspdk_dma.a 00:02:44.882 CC lib/util/dif.o 00:02:45.139 SO libspdk_dma.so.4.0 00:02:45.139 CC lib/util/fd.o 00:02:45.139 CC lib/util/file.o 00:02:45.139 SYMLINK libspdk_dma.so 00:02:45.139 CC lib/util/hexlify.o 00:02:45.139 CC lib/util/iov.o 00:02:45.139 LIB libspdk_ioat.a 00:02:45.139 CC lib/util/math.o 00:02:45.139 SO libspdk_ioat.so.7.0 00:02:45.139 CC lib/util/pipe.o 00:02:45.139 LIB libspdk_vfio_user.a 00:02:45.139 CC lib/util/strerror_tls.o 00:02:45.139 SO libspdk_vfio_user.so.5.0 00:02:45.139 SYMLINK libspdk_ioat.so 00:02:45.139 CC lib/util/string.o 00:02:45.139 CC lib/util/uuid.o 00:02:45.139 CC lib/util/fd_group.o 00:02:45.139 CC lib/util/xor.o 00:02:45.397 SYMLINK libspdk_vfio_user.so 00:02:45.397 CC lib/util/zipf.o 00:02:45.397 LIB libspdk_util.a 00:02:45.655 SO libspdk_util.so.9.1 00:02:45.911 SYMLINK libspdk_util.so 00:02:45.911 LIB libspdk_trace_parser.a 00:02:45.911 SO libspdk_trace_parser.so.5.0 00:02:45.911 CC lib/conf/conf.o 00:02:45.911 CC lib/idxd/idxd.o 00:02:45.911 CC lib/rdma_utils/rdma_utils.o 00:02:45.911 CC lib/rdma_provider/common.o 00:02:45.911 CC lib/idxd/idxd_user.o 00:02:45.911 CC lib/idxd/idxd_kernel.o 00:02:45.911 CC lib/json/json_parse.o 00:02:45.911 CC lib/env_dpdk/env.o 00:02:46.166 CC lib/vmd/vmd.o 00:02:46.166 SYMLINK libspdk_trace_parser.so 00:02:46.166 CC lib/vmd/led.o 00:02:46.166 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:46.166 CC lib/json/json_util.o 00:02:46.166 LIB libspdk_conf.a 00:02:46.166 SO libspdk_conf.so.6.0 00:02:46.166 CC lib/env_dpdk/memory.o 00:02:46.422 LIB libspdk_rdma_utils.a 00:02:46.422 SO libspdk_rdma_utils.so.1.0 00:02:46.422 SYMLINK libspdk_conf.so 00:02:46.422 CC lib/env_dpdk/pci.o 00:02:46.422 CC lib/json/json_write.o 00:02:46.422 CC lib/env_dpdk/init.o 00:02:46.422 SYMLINK libspdk_rdma_utils.so 00:02:46.422 CC lib/env_dpdk/threads.o 00:02:46.422 CC lib/env_dpdk/pci_ioat.o 00:02:46.422 LIB libspdk_rdma_provider.a 00:02:46.422 LIB libspdk_idxd.a 00:02:46.679 SO libspdk_rdma_provider.so.6.0 00:02:46.679 SO libspdk_idxd.so.12.0 00:02:46.679 CC lib/env_dpdk/pci_virtio.o 00:02:46.679 SYMLINK libspdk_idxd.so 00:02:46.679 SYMLINK libspdk_rdma_provider.so 00:02:46.679 CC lib/env_dpdk/pci_vmd.o 00:02:46.679 CC lib/env_dpdk/pci_idxd.o 00:02:46.679 CC lib/env_dpdk/pci_event.o 00:02:46.679 CC lib/env_dpdk/sigbus_handler.o 00:02:46.679 LIB libspdk_vmd.a 00:02:46.936 CC lib/env_dpdk/pci_dpdk.o 00:02:46.936 SO libspdk_vmd.so.6.0 00:02:46.936 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:46.936 SYMLINK libspdk_vmd.so 00:02:46.936 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:46.936 LIB libspdk_json.a 00:02:46.936 SO libspdk_json.so.6.0 00:02:46.936 SYMLINK libspdk_json.so 00:02:47.191 CC lib/jsonrpc/jsonrpc_server.o 00:02:47.191 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:47.191 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:47.191 CC lib/jsonrpc/jsonrpc_client.o 00:02:47.447 LIB libspdk_env_dpdk.a 00:02:47.447 SO libspdk_env_dpdk.so.14.1 00:02:47.704 LIB libspdk_jsonrpc.a 00:02:47.704 SO libspdk_jsonrpc.so.6.0 00:02:47.704 SYMLINK libspdk_env_dpdk.so 00:02:47.704 SYMLINK libspdk_jsonrpc.so 00:02:47.962 CC lib/rpc/rpc.o 00:02:48.218 LIB libspdk_rpc.a 00:02:48.218 SO libspdk_rpc.so.6.0 00:02:48.218 SYMLINK libspdk_rpc.so 00:02:48.475 CC lib/notify/notify.o 00:02:48.475 CC lib/notify/notify_rpc.o 00:02:48.475 CC lib/keyring/keyring.o 00:02:48.475 CC lib/keyring/keyring_rpc.o 00:02:48.475 CC lib/trace/trace_flags.o 00:02:48.475 CC lib/trace/trace.o 00:02:48.475 CC lib/trace/trace_rpc.o 00:02:48.732 LIB 
libspdk_keyring.a 00:02:48.732 LIB libspdk_notify.a 00:02:48.989 SO libspdk_keyring.so.1.0 00:02:48.989 SO libspdk_notify.so.6.0 00:02:48.989 LIB libspdk_trace.a 00:02:48.989 SO libspdk_trace.so.10.0 00:02:48.989 SYMLINK libspdk_keyring.so 00:02:48.989 SYMLINK libspdk_notify.so 00:02:48.989 SYMLINK libspdk_trace.so 00:02:49.246 CC lib/sock/sock.o 00:02:49.246 CC lib/sock/sock_rpc.o 00:02:49.246 CC lib/thread/thread.o 00:02:49.246 CC lib/thread/iobuf.o 00:02:49.560 LIB libspdk_sock.a 00:02:49.822 SO libspdk_sock.so.10.0 00:02:49.822 SYMLINK libspdk_sock.so 00:02:50.079 CC lib/nvme/nvme_ctrlr.o 00:02:50.079 CC lib/nvme/nvme_fabric.o 00:02:50.079 CC lib/nvme/nvme_ns_cmd.o 00:02:50.079 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:50.079 CC lib/nvme/nvme_pcie.o 00:02:50.079 CC lib/nvme/nvme_pcie_common.o 00:02:50.079 CC lib/nvme/nvme_qpair.o 00:02:50.079 CC lib/nvme/nvme.o 00:02:50.079 CC lib/nvme/nvme_ns.o 00:02:51.024 LIB libspdk_thread.a 00:02:51.024 SO libspdk_thread.so.10.1 00:02:51.024 SYMLINK libspdk_thread.so 00:02:51.024 CC lib/nvme/nvme_quirks.o 00:02:51.024 CC lib/nvme/nvme_transport.o 00:02:51.024 CC lib/nvme/nvme_discovery.o 00:02:51.024 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:51.024 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:51.282 CC lib/accel/accel.o 00:02:51.282 CC lib/nvme/nvme_tcp.o 00:02:51.282 CC lib/nvme/nvme_opal.o 00:02:51.282 CC lib/accel/accel_rpc.o 00:02:51.540 CC lib/nvme/nvme_io_msg.o 00:02:51.540 CC lib/nvme/nvme_poll_group.o 00:02:51.540 CC lib/accel/accel_sw.o 00:02:51.797 CC lib/nvme/nvme_zns.o 00:02:51.797 CC lib/nvme/nvme_stubs.o 00:02:51.797 CC lib/blob/blobstore.o 00:02:51.797 CC lib/init/json_config.o 00:02:51.797 CC lib/init/subsystem.o 00:02:52.055 CC lib/init/subsystem_rpc.o 00:02:52.055 CC lib/blob/request.o 00:02:52.055 LIB libspdk_accel.a 00:02:52.313 CC lib/blob/zeroes.o 00:02:52.313 SO libspdk_accel.so.15.1 00:02:52.313 CC lib/init/rpc.o 00:02:52.313 CC lib/nvme/nvme_auth.o 00:02:52.313 SYMLINK libspdk_accel.so 00:02:52.313 CC lib/blob/blob_bs_dev.o 00:02:52.313 CC lib/nvme/nvme_cuse.o 00:02:52.313 LIB libspdk_init.a 00:02:52.313 CC lib/nvme/nvme_rdma.o 00:02:52.571 SO libspdk_init.so.5.0 00:02:52.571 SYMLINK libspdk_init.so 00:02:52.571 CC lib/virtio/virtio.o 00:02:52.571 CC lib/virtio/virtio_vhost_user.o 00:02:52.571 CC lib/virtio/virtio_vfio_user.o 00:02:52.571 CC lib/bdev/bdev.o 00:02:52.830 CC lib/virtio/virtio_pci.o 00:02:52.830 CC lib/event/app.o 00:02:52.830 CC lib/event/reactor.o 00:02:52.830 CC lib/event/log_rpc.o 00:02:52.830 CC lib/bdev/bdev_rpc.o 00:02:53.088 CC lib/event/app_rpc.o 00:02:53.088 LIB libspdk_virtio.a 00:02:53.088 SO libspdk_virtio.so.7.0 00:02:53.088 SYMLINK libspdk_virtio.so 00:02:53.088 CC lib/event/scheduler_static.o 00:02:53.088 CC lib/bdev/bdev_zone.o 00:02:53.088 CC lib/bdev/part.o 00:02:53.346 CC lib/bdev/scsi_nvme.o 00:02:53.346 LIB libspdk_event.a 00:02:53.346 SO libspdk_event.so.14.0 00:02:53.346 SYMLINK libspdk_event.so 00:02:53.913 LIB libspdk_nvme.a 00:02:53.913 SO libspdk_nvme.so.13.1 00:02:54.218 SYMLINK libspdk_nvme.so 00:02:54.787 LIB libspdk_blob.a 00:02:54.787 SO libspdk_blob.so.11.0 00:02:55.044 SYMLINK libspdk_blob.so 00:02:55.303 LIB libspdk_bdev.a 00:02:55.303 CC lib/lvol/lvol.o 00:02:55.303 CC lib/blobfs/blobfs.o 00:02:55.303 CC lib/blobfs/tree.o 00:02:55.303 SO libspdk_bdev.so.15.1 00:02:55.303 SYMLINK libspdk_bdev.so 00:02:55.561 CC lib/nvmf/ctrlr.o 00:02:55.561 CC lib/nvmf/ctrlr_discovery.o 00:02:55.561 CC lib/nvmf/ctrlr_bdev.o 00:02:55.561 CC lib/nbd/nbd.o 00:02:55.561 CC lib/nbd/nbd_rpc.o 00:02:55.561 CC 
lib/scsi/dev.o 00:02:55.561 CC lib/ublk/ublk.o 00:02:55.561 CC lib/ftl/ftl_core.o 00:02:55.820 CC lib/ftl/ftl_init.o 00:02:55.820 CC lib/scsi/lun.o 00:02:56.078 CC lib/scsi/port.o 00:02:56.078 LIB libspdk_nbd.a 00:02:56.078 CC lib/ftl/ftl_layout.o 00:02:56.078 SO libspdk_nbd.so.7.0 00:02:56.078 CC lib/nvmf/subsystem.o 00:02:56.078 SYMLINK libspdk_nbd.so 00:02:56.078 LIB libspdk_lvol.a 00:02:56.078 CC lib/ftl/ftl_debug.o 00:02:56.336 CC lib/ftl/ftl_io.o 00:02:56.336 SO libspdk_lvol.so.10.0 00:02:56.336 CC lib/scsi/scsi.o 00:02:56.336 CC lib/ublk/ublk_rpc.o 00:02:56.336 SYMLINK libspdk_lvol.so 00:02:56.336 CC lib/scsi/scsi_bdev.o 00:02:56.336 LIB libspdk_blobfs.a 00:02:56.336 SO libspdk_blobfs.so.10.0 00:02:56.336 CC lib/nvmf/nvmf.o 00:02:56.336 CC lib/nvmf/nvmf_rpc.o 00:02:56.336 SYMLINK libspdk_blobfs.so 00:02:56.336 CC lib/ftl/ftl_sb.o 00:02:56.336 CC lib/scsi/scsi_pr.o 00:02:56.336 CC lib/scsi/scsi_rpc.o 00:02:56.336 LIB libspdk_ublk.a 00:02:56.595 CC lib/ftl/ftl_l2p.o 00:02:56.595 SO libspdk_ublk.so.3.0 00:02:56.595 SYMLINK libspdk_ublk.so 00:02:56.595 CC lib/scsi/task.o 00:02:56.595 CC lib/ftl/ftl_l2p_flat.o 00:02:56.595 CC lib/ftl/ftl_nv_cache.o 00:02:56.853 CC lib/nvmf/transport.o 00:02:56.853 CC lib/nvmf/tcp.o 00:02:56.853 CC lib/ftl/ftl_band.o 00:02:56.853 LIB libspdk_scsi.a 00:02:56.853 CC lib/ftl/ftl_band_ops.o 00:02:56.853 SO libspdk_scsi.so.9.0 00:02:57.109 SYMLINK libspdk_scsi.so 00:02:57.109 CC lib/nvmf/stubs.o 00:02:57.109 CC lib/nvmf/mdns_server.o 00:02:57.366 CC lib/nvmf/rdma.o 00:02:57.366 CC lib/iscsi/conn.o 00:02:57.366 CC lib/nvmf/auth.o 00:02:57.366 CC lib/ftl/ftl_writer.o 00:02:57.366 CC lib/iscsi/init_grp.o 00:02:57.624 CC lib/ftl/ftl_rq.o 00:02:57.624 CC lib/iscsi/iscsi.o 00:02:57.624 CC lib/iscsi/md5.o 00:02:57.624 CC lib/vhost/vhost.o 00:02:57.624 CC lib/iscsi/param.o 00:02:57.624 CC lib/vhost/vhost_rpc.o 00:02:57.882 CC lib/ftl/ftl_reloc.o 00:02:57.882 CC lib/iscsi/portal_grp.o 00:02:57.882 CC lib/iscsi/tgt_node.o 00:02:58.146 CC lib/iscsi/iscsi_subsystem.o 00:02:58.146 CC lib/iscsi/iscsi_rpc.o 00:02:58.146 CC lib/ftl/ftl_l2p_cache.o 00:02:58.146 CC lib/vhost/vhost_scsi.o 00:02:58.416 CC lib/iscsi/task.o 00:02:58.416 CC lib/vhost/vhost_blk.o 00:02:58.416 CC lib/vhost/rte_vhost_user.o 00:02:58.416 CC lib/ftl/ftl_p2l.o 00:02:58.416 CC lib/ftl/mngt/ftl_mngt.o 00:02:58.686 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:58.686 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:58.686 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:58.945 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:58.945 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:58.945 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:58.945 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:58.945 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:58.945 LIB libspdk_iscsi.a 00:02:59.204 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:59.204 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:59.204 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:59.204 SO libspdk_iscsi.so.8.0 00:02:59.204 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:59.204 CC lib/ftl/utils/ftl_conf.o 00:02:59.462 CC lib/ftl/utils/ftl_md.o 00:02:59.462 CC lib/ftl/utils/ftl_mempool.o 00:02:59.462 SYMLINK libspdk_iscsi.so 00:02:59.462 CC lib/ftl/utils/ftl_bitmap.o 00:02:59.462 CC lib/ftl/utils/ftl_property.o 00:02:59.462 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:59.462 LIB libspdk_nvmf.a 00:02:59.462 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:59.462 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:59.721 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:59.721 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:59.721 LIB libspdk_vhost.a 00:02:59.721 CC 
lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:59.721 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:59.721 SO libspdk_nvmf.so.18.1 00:02:59.721 SO libspdk_vhost.so.8.0 00:02:59.722 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:59.980 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:59.980 SYMLINK libspdk_vhost.so 00:02:59.980 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:59.980 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:59.980 CC lib/ftl/base/ftl_base_dev.o 00:02:59.980 CC lib/ftl/base/ftl_base_bdev.o 00:02:59.980 SYMLINK libspdk_nvmf.so 00:02:59.980 CC lib/ftl/ftl_trace.o 00:03:00.237 LIB libspdk_ftl.a 00:03:00.495 SO libspdk_ftl.so.9.0 00:03:00.754 SYMLINK libspdk_ftl.so 00:03:01.320 CC module/env_dpdk/env_dpdk_rpc.o 00:03:01.320 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:01.320 CC module/blob/bdev/blob_bdev.o 00:03:01.320 CC module/keyring/file/keyring.o 00:03:01.320 CC module/keyring/linux/keyring.o 00:03:01.320 CC module/sock/posix/posix.o 00:03:01.320 CC module/accel/error/accel_error.o 00:03:01.320 CC module/sock/uring/uring.o 00:03:01.320 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:01.320 CC module/scheduler/gscheduler/gscheduler.o 00:03:01.320 LIB libspdk_env_dpdk_rpc.a 00:03:01.320 SO libspdk_env_dpdk_rpc.so.6.0 00:03:01.320 CC module/keyring/linux/keyring_rpc.o 00:03:01.578 SYMLINK libspdk_env_dpdk_rpc.so 00:03:01.578 LIB libspdk_scheduler_dpdk_governor.a 00:03:01.578 CC module/accel/error/accel_error_rpc.o 00:03:01.578 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:01.578 LIB libspdk_scheduler_dynamic.a 00:03:01.578 CC module/keyring/file/keyring_rpc.o 00:03:01.578 SO libspdk_scheduler_dynamic.so.4.0 00:03:01.578 LIB libspdk_scheduler_gscheduler.a 00:03:01.578 LIB libspdk_blob_bdev.a 00:03:01.578 LIB libspdk_keyring_linux.a 00:03:01.578 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:01.578 SYMLINK libspdk_scheduler_dynamic.so 00:03:01.578 SO libspdk_keyring_linux.so.1.0 00:03:01.578 SO libspdk_blob_bdev.so.11.0 00:03:01.578 SO libspdk_scheduler_gscheduler.so.4.0 00:03:01.578 LIB libspdk_accel_error.a 00:03:01.578 CC module/accel/ioat/accel_ioat.o 00:03:01.578 SO libspdk_accel_error.so.2.0 00:03:01.578 SYMLINK libspdk_blob_bdev.so 00:03:01.578 SYMLINK libspdk_keyring_linux.so 00:03:01.836 SYMLINK libspdk_scheduler_gscheduler.so 00:03:01.836 SYMLINK libspdk_accel_error.so 00:03:01.836 CC module/accel/ioat/accel_ioat_rpc.o 00:03:01.836 LIB libspdk_keyring_file.a 00:03:01.836 CC module/accel/dsa/accel_dsa.o 00:03:01.836 CC module/accel/dsa/accel_dsa_rpc.o 00:03:01.836 SO libspdk_keyring_file.so.1.0 00:03:01.836 CC module/accel/iaa/accel_iaa.o 00:03:01.836 SYMLINK libspdk_keyring_file.so 00:03:01.836 LIB libspdk_accel_ioat.a 00:03:01.836 SO libspdk_accel_ioat.so.6.0 00:03:02.095 CC module/accel/iaa/accel_iaa_rpc.o 00:03:02.095 SYMLINK libspdk_accel_ioat.so 00:03:02.095 CC module/blobfs/bdev/blobfs_bdev.o 00:03:02.095 CC module/bdev/delay/vbdev_delay.o 00:03:02.095 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:02.095 CC module/bdev/error/vbdev_error.o 00:03:02.095 CC module/bdev/gpt/gpt.o 00:03:02.095 LIB libspdk_sock_posix.a 00:03:02.095 LIB libspdk_accel_iaa.a 00:03:02.095 SO libspdk_sock_posix.so.6.0 00:03:02.095 SO libspdk_accel_iaa.so.3.0 00:03:02.352 LIB libspdk_accel_dsa.a 00:03:02.352 CC module/bdev/lvol/vbdev_lvol.o 00:03:02.352 SYMLINK libspdk_sock_posix.so 00:03:02.352 SYMLINK libspdk_accel_iaa.so 00:03:02.352 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:02.352 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:02.352 SO libspdk_accel_dsa.so.5.0 00:03:02.352 LIB libspdk_blobfs_bdev.a 00:03:02.352 
LIB libspdk_sock_uring.a 00:03:02.352 SO libspdk_blobfs_bdev.so.6.0 00:03:02.352 SO libspdk_sock_uring.so.5.0 00:03:02.352 CC module/bdev/error/vbdev_error_rpc.o 00:03:02.352 CC module/bdev/gpt/vbdev_gpt.o 00:03:02.352 SYMLINK libspdk_accel_dsa.so 00:03:02.352 SYMLINK libspdk_blobfs_bdev.so 00:03:02.610 SYMLINK libspdk_sock_uring.so 00:03:02.610 CC module/bdev/malloc/bdev_malloc.o 00:03:02.610 LIB libspdk_bdev_delay.a 00:03:02.610 SO libspdk_bdev_delay.so.6.0 00:03:02.610 LIB libspdk_bdev_error.a 00:03:02.610 SO libspdk_bdev_error.so.6.0 00:03:02.610 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:02.610 CC module/bdev/null/bdev_null.o 00:03:02.610 SYMLINK libspdk_bdev_delay.so 00:03:02.610 CC module/bdev/nvme/bdev_nvme.o 00:03:02.866 CC module/bdev/passthru/vbdev_passthru.o 00:03:02.866 CC module/bdev/raid/bdev_raid.o 00:03:02.866 LIB libspdk_bdev_gpt.a 00:03:02.866 SYMLINK libspdk_bdev_error.so 00:03:02.866 SO libspdk_bdev_gpt.so.6.0 00:03:02.866 LIB libspdk_bdev_lvol.a 00:03:02.866 SO libspdk_bdev_lvol.so.6.0 00:03:02.866 SYMLINK libspdk_bdev_gpt.so 00:03:02.866 LIB libspdk_bdev_malloc.a 00:03:02.866 CC module/bdev/split/vbdev_split.o 00:03:02.866 SYMLINK libspdk_bdev_lvol.so 00:03:02.866 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:02.866 SO libspdk_bdev_malloc.so.6.0 00:03:02.866 CC module/bdev/null/bdev_null_rpc.o 00:03:03.124 SYMLINK libspdk_bdev_malloc.so 00:03:03.124 CC module/bdev/split/vbdev_split_rpc.o 00:03:03.124 CC module/bdev/uring/bdev_uring.o 00:03:03.124 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:03.124 CC module/bdev/aio/bdev_aio.o 00:03:03.124 LIB libspdk_bdev_null.a 00:03:03.124 CC module/bdev/ftl/bdev_ftl.o 00:03:03.124 CC module/bdev/uring/bdev_uring_rpc.o 00:03:03.124 SO libspdk_bdev_null.so.6.0 00:03:03.124 LIB libspdk_bdev_split.a 00:03:03.124 LIB libspdk_bdev_passthru.a 00:03:03.381 SO libspdk_bdev_split.so.6.0 00:03:03.381 SO libspdk_bdev_passthru.so.6.0 00:03:03.381 SYMLINK libspdk_bdev_null.so 00:03:03.381 CC module/bdev/aio/bdev_aio_rpc.o 00:03:03.381 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:03.381 SYMLINK libspdk_bdev_split.so 00:03:03.381 CC module/bdev/raid/bdev_raid_rpc.o 00:03:03.381 CC module/bdev/raid/bdev_raid_sb.o 00:03:03.381 SYMLINK libspdk_bdev_passthru.so 00:03:03.381 LIB libspdk_bdev_uring.a 00:03:03.381 SO libspdk_bdev_uring.so.6.0 00:03:03.381 LIB libspdk_bdev_aio.a 00:03:03.381 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:03.381 LIB libspdk_bdev_zone_block.a 00:03:03.639 SO libspdk_bdev_aio.so.6.0 00:03:03.639 SO libspdk_bdev_zone_block.so.6.0 00:03:03.639 SYMLINK libspdk_bdev_uring.so 00:03:03.639 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:03.639 SYMLINK libspdk_bdev_aio.so 00:03:03.639 CC module/bdev/raid/raid0.o 00:03:03.639 CC module/bdev/raid/raid1.o 00:03:03.639 CC module/bdev/iscsi/bdev_iscsi.o 00:03:03.639 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:03.639 SYMLINK libspdk_bdev_zone_block.so 00:03:03.639 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:03.639 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:03.639 LIB libspdk_bdev_ftl.a 00:03:03.639 SO libspdk_bdev_ftl.so.6.0 00:03:03.896 SYMLINK libspdk_bdev_ftl.so 00:03:03.896 CC module/bdev/raid/concat.o 00:03:03.896 CC module/bdev/nvme/nvme_rpc.o 00:03:03.896 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:03.896 CC module/bdev/nvme/bdev_mdns_client.o 00:03:03.896 CC module/bdev/nvme/vbdev_opal.o 00:03:03.896 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:03.896 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:03.896 LIB libspdk_bdev_iscsi.a 00:03:04.152 LIB 
libspdk_bdev_raid.a 00:03:04.152 SO libspdk_bdev_iscsi.so.6.0 00:03:04.152 SO libspdk_bdev_raid.so.6.0 00:03:04.152 SYMLINK libspdk_bdev_iscsi.so 00:03:04.152 LIB libspdk_bdev_virtio.a 00:03:04.152 SO libspdk_bdev_virtio.so.6.0 00:03:04.152 SYMLINK libspdk_bdev_raid.so 00:03:04.152 SYMLINK libspdk_bdev_virtio.so 00:03:05.087 LIB libspdk_bdev_nvme.a 00:03:05.087 SO libspdk_bdev_nvme.so.7.0 00:03:05.087 SYMLINK libspdk_bdev_nvme.so 00:03:05.652 CC module/event/subsystems/scheduler/scheduler.o 00:03:05.652 CC module/event/subsystems/iobuf/iobuf.o 00:03:05.652 CC module/event/subsystems/sock/sock.o 00:03:05.652 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:05.652 CC module/event/subsystems/vmd/vmd.o 00:03:05.652 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:05.652 CC module/event/subsystems/keyring/keyring.o 00:03:05.652 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:05.935 LIB libspdk_event_sock.a 00:03:05.935 LIB libspdk_event_scheduler.a 00:03:05.935 LIB libspdk_event_vhost_blk.a 00:03:05.935 LIB libspdk_event_iobuf.a 00:03:05.935 LIB libspdk_event_vmd.a 00:03:05.935 LIB libspdk_event_keyring.a 00:03:05.935 SO libspdk_event_vhost_blk.so.3.0 00:03:05.935 SO libspdk_event_scheduler.so.4.0 00:03:05.935 SO libspdk_event_sock.so.5.0 00:03:05.935 SO libspdk_event_keyring.so.1.0 00:03:05.935 SO libspdk_event_vmd.so.6.0 00:03:05.935 SO libspdk_event_iobuf.so.3.0 00:03:05.935 SYMLINK libspdk_event_vhost_blk.so 00:03:05.935 SYMLINK libspdk_event_keyring.so 00:03:05.935 SYMLINK libspdk_event_scheduler.so 00:03:05.935 SYMLINK libspdk_event_sock.so 00:03:05.935 SYMLINK libspdk_event_vmd.so 00:03:05.935 SYMLINK libspdk_event_iobuf.so 00:03:06.193 CC module/event/subsystems/accel/accel.o 00:03:06.451 LIB libspdk_event_accel.a 00:03:06.451 SO libspdk_event_accel.so.6.0 00:03:06.451 SYMLINK libspdk_event_accel.so 00:03:06.708 CC module/event/subsystems/bdev/bdev.o 00:03:06.966 LIB libspdk_event_bdev.a 00:03:06.966 SO libspdk_event_bdev.so.6.0 00:03:06.966 SYMLINK libspdk_event_bdev.so 00:03:07.224 CC module/event/subsystems/scsi/scsi.o 00:03:07.224 CC module/event/subsystems/nbd/nbd.o 00:03:07.224 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:07.224 CC module/event/subsystems/ublk/ublk.o 00:03:07.224 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:07.481 LIB libspdk_event_ublk.a 00:03:07.481 LIB libspdk_event_nbd.a 00:03:07.481 LIB libspdk_event_scsi.a 00:03:07.481 SO libspdk_event_ublk.so.3.0 00:03:07.481 SO libspdk_event_nbd.so.6.0 00:03:07.481 SO libspdk_event_scsi.so.6.0 00:03:07.481 SYMLINK libspdk_event_ublk.so 00:03:07.481 SYMLINK libspdk_event_nbd.so 00:03:07.481 LIB libspdk_event_nvmf.a 00:03:07.481 SYMLINK libspdk_event_scsi.so 00:03:07.481 SO libspdk_event_nvmf.so.6.0 00:03:07.739 SYMLINK libspdk_event_nvmf.so 00:03:07.739 CC module/event/subsystems/iscsi/iscsi.o 00:03:07.739 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:07.997 LIB libspdk_event_vhost_scsi.a 00:03:07.997 SO libspdk_event_vhost_scsi.so.3.0 00:03:07.997 LIB libspdk_event_iscsi.a 00:03:07.997 SO libspdk_event_iscsi.so.6.0 00:03:07.997 SYMLINK libspdk_event_vhost_scsi.so 00:03:07.997 SYMLINK libspdk_event_iscsi.so 00:03:08.255 SO libspdk.so.6.0 00:03:08.255 SYMLINK libspdk.so 00:03:08.513 TEST_HEADER include/spdk/accel.h 00:03:08.513 TEST_HEADER include/spdk/accel_module.h 00:03:08.513 TEST_HEADER include/spdk/assert.h 00:03:08.513 TEST_HEADER include/spdk/barrier.h 00:03:08.513 TEST_HEADER include/spdk/base64.h 00:03:08.513 TEST_HEADER include/spdk/bdev.h 00:03:08.513 TEST_HEADER 
include/spdk/bdev_module.h 00:03:08.513 CXX app/trace/trace.o 00:03:08.513 TEST_HEADER include/spdk/bdev_zone.h 00:03:08.513 CC test/rpc_client/rpc_client_test.o 00:03:08.513 TEST_HEADER include/spdk/bit_array.h 00:03:08.513 TEST_HEADER include/spdk/bit_pool.h 00:03:08.513 TEST_HEADER include/spdk/blob_bdev.h 00:03:08.513 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:08.513 TEST_HEADER include/spdk/blobfs.h 00:03:08.513 TEST_HEADER include/spdk/blob.h 00:03:08.513 TEST_HEADER include/spdk/conf.h 00:03:08.513 TEST_HEADER include/spdk/config.h 00:03:08.513 TEST_HEADER include/spdk/cpuset.h 00:03:08.513 TEST_HEADER include/spdk/crc16.h 00:03:08.513 TEST_HEADER include/spdk/crc32.h 00:03:08.513 TEST_HEADER include/spdk/crc64.h 00:03:08.513 TEST_HEADER include/spdk/dif.h 00:03:08.513 TEST_HEADER include/spdk/dma.h 00:03:08.513 TEST_HEADER include/spdk/endian.h 00:03:08.513 TEST_HEADER include/spdk/env_dpdk.h 00:03:08.513 TEST_HEADER include/spdk/env.h 00:03:08.513 TEST_HEADER include/spdk/event.h 00:03:08.513 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:08.513 TEST_HEADER include/spdk/fd_group.h 00:03:08.513 TEST_HEADER include/spdk/fd.h 00:03:08.513 TEST_HEADER include/spdk/file.h 00:03:08.513 TEST_HEADER include/spdk/ftl.h 00:03:08.513 TEST_HEADER include/spdk/gpt_spec.h 00:03:08.513 TEST_HEADER include/spdk/hexlify.h 00:03:08.513 TEST_HEADER include/spdk/histogram_data.h 00:03:08.513 TEST_HEADER include/spdk/idxd.h 00:03:08.513 CC test/thread/poller_perf/poller_perf.o 00:03:08.513 TEST_HEADER include/spdk/idxd_spec.h 00:03:08.513 CC examples/util/zipf/zipf.o 00:03:08.513 TEST_HEADER include/spdk/init.h 00:03:08.513 TEST_HEADER include/spdk/ioat.h 00:03:08.513 CC examples/ioat/perf/perf.o 00:03:08.513 TEST_HEADER include/spdk/ioat_spec.h 00:03:08.513 TEST_HEADER include/spdk/iscsi_spec.h 00:03:08.513 TEST_HEADER include/spdk/json.h 00:03:08.513 TEST_HEADER include/spdk/jsonrpc.h 00:03:08.513 TEST_HEADER include/spdk/keyring.h 00:03:08.513 TEST_HEADER include/spdk/keyring_module.h 00:03:08.513 TEST_HEADER include/spdk/likely.h 00:03:08.513 TEST_HEADER include/spdk/log.h 00:03:08.771 TEST_HEADER include/spdk/lvol.h 00:03:08.771 TEST_HEADER include/spdk/memory.h 00:03:08.771 TEST_HEADER include/spdk/mmio.h 00:03:08.771 TEST_HEADER include/spdk/nbd.h 00:03:08.771 TEST_HEADER include/spdk/notify.h 00:03:08.771 TEST_HEADER include/spdk/nvme.h 00:03:08.771 TEST_HEADER include/spdk/nvme_intel.h 00:03:08.771 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:08.771 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:08.771 CC test/dma/test_dma/test_dma.o 00:03:08.771 TEST_HEADER include/spdk/nvme_spec.h 00:03:08.771 TEST_HEADER include/spdk/nvme_zns.h 00:03:08.771 CC test/app/bdev_svc/bdev_svc.o 00:03:08.771 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:08.771 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:08.771 TEST_HEADER include/spdk/nvmf.h 00:03:08.771 TEST_HEADER include/spdk/nvmf_spec.h 00:03:08.771 TEST_HEADER include/spdk/nvmf_transport.h 00:03:08.771 TEST_HEADER include/spdk/opal.h 00:03:08.771 TEST_HEADER include/spdk/opal_spec.h 00:03:08.771 TEST_HEADER include/spdk/pci_ids.h 00:03:08.771 TEST_HEADER include/spdk/pipe.h 00:03:08.771 TEST_HEADER include/spdk/queue.h 00:03:08.771 TEST_HEADER include/spdk/reduce.h 00:03:08.771 TEST_HEADER include/spdk/rpc.h 00:03:08.771 TEST_HEADER include/spdk/scheduler.h 00:03:08.771 TEST_HEADER include/spdk/scsi.h 00:03:08.771 TEST_HEADER include/spdk/scsi_spec.h 00:03:08.771 TEST_HEADER include/spdk/sock.h 00:03:08.771 TEST_HEADER include/spdk/stdinc.h 00:03:08.771 
TEST_HEADER include/spdk/string.h 00:03:08.771 TEST_HEADER include/spdk/thread.h 00:03:08.771 TEST_HEADER include/spdk/trace.h 00:03:08.771 TEST_HEADER include/spdk/trace_parser.h 00:03:08.771 CC test/env/mem_callbacks/mem_callbacks.o 00:03:08.771 TEST_HEADER include/spdk/tree.h 00:03:08.771 TEST_HEADER include/spdk/ublk.h 00:03:08.771 TEST_HEADER include/spdk/util.h 00:03:08.771 TEST_HEADER include/spdk/uuid.h 00:03:08.771 TEST_HEADER include/spdk/version.h 00:03:08.771 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:08.771 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:08.771 TEST_HEADER include/spdk/vhost.h 00:03:08.771 TEST_HEADER include/spdk/vmd.h 00:03:08.771 TEST_HEADER include/spdk/xor.h 00:03:08.771 TEST_HEADER include/spdk/zipf.h 00:03:08.771 CXX test/cpp_headers/accel.o 00:03:08.771 LINK rpc_client_test 00:03:08.771 LINK poller_perf 00:03:08.771 LINK zipf 00:03:08.771 LINK interrupt_tgt 00:03:08.771 LINK bdev_svc 00:03:09.028 LINK ioat_perf 00:03:09.028 LINK spdk_trace 00:03:09.028 CXX test/cpp_headers/accel_module.o 00:03:09.028 CXX test/cpp_headers/assert.o 00:03:09.028 CC app/trace_record/trace_record.o 00:03:09.028 CC app/nvmf_tgt/nvmf_main.o 00:03:09.028 LINK test_dma 00:03:09.028 CC examples/ioat/verify/verify.o 00:03:09.286 CC test/app/histogram_perf/histogram_perf.o 00:03:09.286 CXX test/cpp_headers/barrier.o 00:03:09.286 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:09.286 CC test/app/jsoncat/jsoncat.o 00:03:09.286 CC test/app/stub/stub.o 00:03:09.286 LINK nvmf_tgt 00:03:09.286 LINK histogram_perf 00:03:09.286 CXX test/cpp_headers/base64.o 00:03:09.286 LINK mem_callbacks 00:03:09.286 LINK spdk_trace_record 00:03:09.286 LINK jsoncat 00:03:09.286 LINK verify 00:03:09.286 CC test/env/vtophys/vtophys.o 00:03:09.543 LINK stub 00:03:09.543 CXX test/cpp_headers/bdev.o 00:03:09.543 LINK vtophys 00:03:09.543 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:09.543 LINK nvme_fuzz 00:03:09.543 CC app/iscsi_tgt/iscsi_tgt.o 00:03:09.804 CC test/event/event_perf/event_perf.o 00:03:09.804 CC test/event/reactor/reactor.o 00:03:09.804 CC app/spdk_tgt/spdk_tgt.o 00:03:09.804 CXX test/cpp_headers/bdev_module.o 00:03:09.804 CC test/nvme/aer/aer.o 00:03:09.804 CC examples/thread/thread/thread_ex.o 00:03:09.804 LINK env_dpdk_post_init 00:03:09.804 LINK event_perf 00:03:09.804 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:09.804 LINK reactor 00:03:09.804 LINK iscsi_tgt 00:03:10.063 CXX test/cpp_headers/bdev_zone.o 00:03:10.063 CC examples/sock/hello_world/hello_sock.o 00:03:10.063 LINK spdk_tgt 00:03:10.063 LINK aer 00:03:10.063 CC test/env/memory/memory_ut.o 00:03:10.063 LINK thread 00:03:10.063 CC test/event/reactor_perf/reactor_perf.o 00:03:10.063 CXX test/cpp_headers/bit_array.o 00:03:10.063 CC examples/vmd/lsvmd/lsvmd.o 00:03:10.321 LINK hello_sock 00:03:10.321 CC examples/vmd/led/led.o 00:03:10.321 LINK reactor_perf 00:03:10.321 CC app/spdk_lspci/spdk_lspci.o 00:03:10.321 CC test/nvme/reset/reset.o 00:03:10.321 LINK lsvmd 00:03:10.321 CC test/nvme/sgl/sgl.o 00:03:10.321 CXX test/cpp_headers/bit_pool.o 00:03:10.321 LINK led 00:03:10.321 LINK spdk_lspci 00:03:10.580 CXX test/cpp_headers/blob_bdev.o 00:03:10.580 CC test/event/app_repeat/app_repeat.o 00:03:10.580 LINK reset 00:03:10.580 CC test/accel/dif/dif.o 00:03:10.580 LINK sgl 00:03:10.580 CC test/nvme/e2edp/nvme_dp.o 00:03:10.580 CC app/spdk_nvme_perf/perf.o 00:03:10.580 CXX test/cpp_headers/blobfs_bdev.o 00:03:10.580 LINK app_repeat 00:03:10.839 CXX test/cpp_headers/blobfs.o 00:03:10.839 CC examples/idxd/perf/perf.o 
00:03:10.839 CXX test/cpp_headers/blob.o 00:03:10.839 LINK nvme_dp 00:03:11.097 CC test/event/scheduler/scheduler.o 00:03:11.097 LINK dif 00:03:11.097 CC test/blobfs/mkfs/mkfs.o 00:03:11.097 LINK idxd_perf 00:03:11.097 CXX test/cpp_headers/conf.o 00:03:11.097 CC test/lvol/esnap/esnap.o 00:03:11.097 LINK memory_ut 00:03:11.097 CC test/nvme/overhead/overhead.o 00:03:11.355 LINK mkfs 00:03:11.355 LINK scheduler 00:03:11.355 CXX test/cpp_headers/config.o 00:03:11.355 CXX test/cpp_headers/cpuset.o 00:03:11.355 CC test/nvme/err_injection/err_injection.o 00:03:11.355 CC examples/accel/perf/accel_perf.o 00:03:11.614 LINK iscsi_fuzz 00:03:11.614 CC test/env/pci/pci_ut.o 00:03:11.614 CXX test/cpp_headers/crc16.o 00:03:11.614 LINK overhead 00:03:11.614 CXX test/cpp_headers/crc32.o 00:03:11.614 LINK err_injection 00:03:11.614 LINK spdk_nvme_perf 00:03:11.614 CXX test/cpp_headers/crc64.o 00:03:11.614 CC test/bdev/bdevio/bdevio.o 00:03:11.872 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:11.872 CC test/nvme/startup/startup.o 00:03:11.872 CC test/nvme/reserve/reserve.o 00:03:11.872 CC test/nvme/simple_copy/simple_copy.o 00:03:11.872 CXX test/cpp_headers/dif.o 00:03:11.872 LINK pci_ut 00:03:11.872 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:11.872 LINK accel_perf 00:03:11.872 LINK startup 00:03:11.872 CC app/spdk_nvme_identify/identify.o 00:03:12.131 CXX test/cpp_headers/dma.o 00:03:12.131 LINK reserve 00:03:12.131 LINK simple_copy 00:03:12.131 LINK bdevio 00:03:12.131 CXX test/cpp_headers/endian.o 00:03:12.389 CXX test/cpp_headers/env_dpdk.o 00:03:12.389 CC test/nvme/connect_stress/connect_stress.o 00:03:12.389 LINK vhost_fuzz 00:03:12.389 CC app/spdk_nvme_discover/discovery_aer.o 00:03:12.389 CXX test/cpp_headers/env.o 00:03:12.389 CXX test/cpp_headers/event.o 00:03:12.389 CC test/nvme/boot_partition/boot_partition.o 00:03:12.389 CC examples/blob/cli/blobcli.o 00:03:12.389 CC examples/blob/hello_world/hello_blob.o 00:03:12.647 LINK connect_stress 00:03:12.647 CXX test/cpp_headers/fd_group.o 00:03:12.647 LINK spdk_nvme_discover 00:03:12.647 LINK boot_partition 00:03:12.647 CXX test/cpp_headers/fd.o 00:03:12.647 LINK hello_blob 00:03:12.905 LINK spdk_nvme_identify 00:03:12.905 CC examples/nvme/hello_world/hello_world.o 00:03:12.905 CC examples/nvme/reconnect/reconnect.o 00:03:12.905 CXX test/cpp_headers/file.o 00:03:12.905 CC examples/bdev/hello_world/hello_bdev.o 00:03:12.905 CC examples/bdev/bdevperf/bdevperf.o 00:03:12.905 CC test/nvme/compliance/nvme_compliance.o 00:03:12.905 LINK blobcli 00:03:12.905 LINK hello_world 00:03:12.905 CC app/spdk_top/spdk_top.o 00:03:13.165 CC test/nvme/fused_ordering/fused_ordering.o 00:03:13.165 CXX test/cpp_headers/ftl.o 00:03:13.165 LINK reconnect 00:03:13.165 LINK hello_bdev 00:03:13.165 LINK nvme_compliance 00:03:13.165 LINK fused_ordering 00:03:13.165 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:13.165 CXX test/cpp_headers/gpt_spec.o 00:03:13.424 CC examples/nvme/arbitration/arbitration.o 00:03:13.424 CC examples/nvme/hotplug/hotplug.o 00:03:13.424 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:13.424 CXX test/cpp_headers/hexlify.o 00:03:13.424 CC examples/nvme/abort/abort.o 00:03:13.424 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:13.681 CXX test/cpp_headers/histogram_data.o 00:03:13.681 LINK cmb_copy 00:03:13.681 LINK hotplug 00:03:13.681 LINK arbitration 00:03:13.681 LINK bdevperf 00:03:13.681 LINK doorbell_aers 00:03:13.681 LINK nvme_manage 00:03:13.681 CXX test/cpp_headers/idxd.o 00:03:13.937 CXX test/cpp_headers/idxd_spec.o 00:03:13.937 CC 
test/nvme/fdp/fdp.o 00:03:13.937 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:13.937 CXX test/cpp_headers/init.o 00:03:13.937 CC test/nvme/cuse/cuse.o 00:03:13.937 CXX test/cpp_headers/ioat.o 00:03:13.937 LINK spdk_top 00:03:14.194 CC app/vhost/vhost.o 00:03:14.194 LINK abort 00:03:14.194 LINK pmr_persistence 00:03:14.194 CC app/spdk_dd/spdk_dd.o 00:03:14.194 CXX test/cpp_headers/ioat_spec.o 00:03:14.194 CXX test/cpp_headers/iscsi_spec.o 00:03:14.194 LINK fdp 00:03:14.194 CXX test/cpp_headers/json.o 00:03:14.194 LINK vhost 00:03:14.194 CXX test/cpp_headers/jsonrpc.o 00:03:14.451 CXX test/cpp_headers/keyring.o 00:03:14.451 CXX test/cpp_headers/keyring_module.o 00:03:14.451 CC app/fio/nvme/fio_plugin.o 00:03:14.451 CXX test/cpp_headers/likely.o 00:03:14.451 CXX test/cpp_headers/log.o 00:03:14.451 CXX test/cpp_headers/lvol.o 00:03:14.708 LINK spdk_dd 00:03:14.708 CC examples/nvmf/nvmf/nvmf.o 00:03:14.708 CXX test/cpp_headers/memory.o 00:03:14.708 CC app/fio/bdev/fio_plugin.o 00:03:14.708 CXX test/cpp_headers/mmio.o 00:03:14.708 CXX test/cpp_headers/nbd.o 00:03:14.708 CXX test/cpp_headers/notify.o 00:03:14.708 CXX test/cpp_headers/nvme.o 00:03:14.708 CXX test/cpp_headers/nvme_intel.o 00:03:14.966 CXX test/cpp_headers/nvme_ocssd.o 00:03:14.966 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:14.966 CXX test/cpp_headers/nvme_spec.o 00:03:14.966 CXX test/cpp_headers/nvme_zns.o 00:03:14.966 CXX test/cpp_headers/nvmf_cmd.o 00:03:14.966 LINK spdk_nvme 00:03:14.966 LINK nvmf 00:03:15.223 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:15.223 CXX test/cpp_headers/nvmf.o 00:03:15.223 CXX test/cpp_headers/nvmf_spec.o 00:03:15.223 CXX test/cpp_headers/nvmf_transport.o 00:03:15.223 LINK spdk_bdev 00:03:15.223 CXX test/cpp_headers/opal.o 00:03:15.223 CXX test/cpp_headers/opal_spec.o 00:03:15.223 CXX test/cpp_headers/pci_ids.o 00:03:15.223 CXX test/cpp_headers/pipe.o 00:03:15.223 CXX test/cpp_headers/queue.o 00:03:15.223 CXX test/cpp_headers/reduce.o 00:03:15.223 CXX test/cpp_headers/rpc.o 00:03:15.223 CXX test/cpp_headers/scheduler.o 00:03:15.480 LINK cuse 00:03:15.480 CXX test/cpp_headers/scsi.o 00:03:15.480 CXX test/cpp_headers/scsi_spec.o 00:03:15.480 CXX test/cpp_headers/sock.o 00:03:15.480 CXX test/cpp_headers/stdinc.o 00:03:15.480 CXX test/cpp_headers/string.o 00:03:15.480 CXX test/cpp_headers/thread.o 00:03:15.480 CXX test/cpp_headers/trace.o 00:03:15.480 CXX test/cpp_headers/trace_parser.o 00:03:15.480 CXX test/cpp_headers/tree.o 00:03:15.737 CXX test/cpp_headers/ublk.o 00:03:15.737 CXX test/cpp_headers/util.o 00:03:15.737 CXX test/cpp_headers/uuid.o 00:03:15.737 CXX test/cpp_headers/version.o 00:03:15.737 CXX test/cpp_headers/vfio_user_pci.o 00:03:15.737 CXX test/cpp_headers/vfio_user_spec.o 00:03:15.737 CXX test/cpp_headers/vhost.o 00:03:15.737 CXX test/cpp_headers/vmd.o 00:03:15.737 CXX test/cpp_headers/xor.o 00:03:15.737 CXX test/cpp_headers/zipf.o 00:03:16.301 LINK esnap 00:03:16.866 00:03:16.866 real 1m5.138s 00:03:16.866 user 6m42.034s 00:03:16.866 sys 1m38.668s 00:03:16.866 12:27:42 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:16.866 12:27:42 make -- common/autotest_common.sh@10 -- $ set +x 00:03:16.866 ************************************ 00:03:16.866 END TEST make 00:03:16.866 ************************************ 00:03:16.866 12:27:42 -- common/autotest_common.sh@1142 -- $ return 0 00:03:16.866 12:27:42 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:16.866 12:27:42 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:16.866 12:27:42 -- pm/common@40 -- $ 
local monitor pid pids signal=TERM 00:03:16.866 12:27:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.866 12:27:42 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:16.866 12:27:42 -- pm/common@44 -- $ pid=5312 00:03:16.866 12:27:42 -- pm/common@50 -- $ kill -TERM 5312 00:03:16.866 12:27:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.867 12:27:42 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:16.867 12:27:42 -- pm/common@44 -- $ pid=5314 00:03:16.867 12:27:42 -- pm/common@50 -- $ kill -TERM 5314 00:03:17.124 12:27:42 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:17.124 12:27:42 -- nvmf/common.sh@7 -- # uname -s 00:03:17.124 12:27:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:17.124 12:27:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:17.124 12:27:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:17.124 12:27:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:17.124 12:27:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:17.124 12:27:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:17.124 12:27:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:17.124 12:27:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:17.124 12:27:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:17.124 12:27:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:17.124 12:27:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:03:17.124 12:27:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=16360ad5-8c23-4d49-afe0-9a35c426fec5 00:03:17.124 12:27:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:17.124 12:27:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:17.124 12:27:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:17.124 12:27:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:17.124 12:27:42 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:17.124 12:27:42 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:17.124 12:27:42 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:17.124 12:27:42 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:17.124 12:27:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:17.124 12:27:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:17.124 12:27:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:17.124 12:27:42 -- paths/export.sh@5 -- # export PATH 00:03:17.124 12:27:42 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:17.124 12:27:42 -- nvmf/common.sh@47 -- # : 0 00:03:17.124 12:27:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:17.124 12:27:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:17.124 12:27:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:17.124 12:27:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:17.124 12:27:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:17.124 12:27:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:17.124 12:27:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:17.124 12:27:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:17.124 12:27:42 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:17.124 12:27:42 -- spdk/autotest.sh@32 -- # uname -s 00:03:17.124 12:27:42 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:17.124 12:27:42 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:17.124 12:27:42 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:17.124 12:27:43 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:17.124 12:27:43 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:17.124 12:27:43 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:17.124 12:27:43 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:17.124 12:27:43 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:17.124 12:27:43 -- spdk/autotest.sh@48 -- # udevadm_pid=52935 00:03:17.124 12:27:43 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:17.124 12:27:43 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:17.124 12:27:43 -- pm/common@17 -- # local monitor 00:03:17.124 12:27:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:17.125 12:27:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:17.125 12:27:43 -- pm/common@25 -- # sleep 1 00:03:17.125 12:27:43 -- pm/common@21 -- # date +%s 00:03:17.125 12:27:43 -- pm/common@21 -- # date +%s 00:03:17.125 12:27:43 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720787263 00:03:17.125 12:27:43 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720787263 00:03:17.125 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720787263_collect-vmstat.pm.log 00:03:17.125 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720787263_collect-cpu-load.pm.log 00:03:18.056 12:27:44 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:18.056 12:27:44 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:18.056 12:27:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:18.056 12:27:44 -- common/autotest_common.sh@10 -- # set +x 00:03:18.056 12:27:44 -- spdk/autotest.sh@59 -- # create_test_list 00:03:18.056 12:27:44 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:18.056 12:27:44 -- common/autotest_common.sh@10 -- # set +x 00:03:18.314 12:27:44 -- 
spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:18.314 12:27:44 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:18.314 12:27:44 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:18.314 12:27:44 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:18.315 12:27:44 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:18.315 12:27:44 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:18.315 12:27:44 -- common/autotest_common.sh@1455 -- # uname 00:03:18.315 12:27:44 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:18.315 12:27:44 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:18.315 12:27:44 -- common/autotest_common.sh@1475 -- # uname 00:03:18.315 12:27:44 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:18.315 12:27:44 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:18.315 12:27:44 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:18.315 12:27:44 -- spdk/autotest.sh@72 -- # hash lcov 00:03:18.315 12:27:44 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:18.315 12:27:44 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:18.315 --rc lcov_branch_coverage=1 00:03:18.315 --rc lcov_function_coverage=1 00:03:18.315 --rc genhtml_branch_coverage=1 00:03:18.315 --rc genhtml_function_coverage=1 00:03:18.315 --rc genhtml_legend=1 00:03:18.315 --rc geninfo_all_blocks=1 00:03:18.315 ' 00:03:18.315 12:27:44 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:18.315 --rc lcov_branch_coverage=1 00:03:18.315 --rc lcov_function_coverage=1 00:03:18.315 --rc genhtml_branch_coverage=1 00:03:18.315 --rc genhtml_function_coverage=1 00:03:18.315 --rc genhtml_legend=1 00:03:18.315 --rc geninfo_all_blocks=1 00:03:18.315 ' 00:03:18.315 12:27:44 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:18.315 --rc lcov_branch_coverage=1 00:03:18.315 --rc lcov_function_coverage=1 00:03:18.315 --rc genhtml_branch_coverage=1 00:03:18.315 --rc genhtml_function_coverage=1 00:03:18.315 --rc genhtml_legend=1 00:03:18.315 --rc geninfo_all_blocks=1 00:03:18.315 --no-external' 00:03:18.315 12:27:44 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:18.315 --rc lcov_branch_coverage=1 00:03:18.315 --rc lcov_function_coverage=1 00:03:18.315 --rc genhtml_branch_coverage=1 00:03:18.315 --rc genhtml_function_coverage=1 00:03:18.315 --rc genhtml_legend=1 00:03:18.315 --rc geninfo_all_blocks=1 00:03:18.315 --no-external' 00:03:18.315 12:27:44 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:18.315 lcov: LCOV version 1.14 00:03:18.315 12:27:44 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:33.211 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:33.211 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:48.083 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:48.083 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 
00:03:48.083 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:48.083 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:48.083 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:48.083 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:48.083 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:48.083 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:48.083 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:48.083 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:48.083 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:48.083 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:48.083 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:48.083 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:48.083 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:48.083 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:48.083 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:48.083 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:48.083 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:48.083 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:48.083 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:48.083 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:48.083 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:48.083 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:48.083 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:48.083 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:48.083 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:48.083 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:48.083 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:48.083 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:48.083 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:48.083 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:48.083 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:48.083 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:48.083 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:48.083 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:48.083 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:48.083 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:48.083 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:48.083 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:48.083 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:48.083 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:48.083 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:48.083 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:48.083 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:48.083 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:48.083 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:48.083 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:48.083 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:48.083 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:48.083 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:48.083 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:48.083 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:48.083 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:48.083 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:48.083 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:48.083 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:48.083 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:48.083 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:48.083 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:48.083 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:48.083 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:48.083 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:48.083 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:48.084 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data 
for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 
00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did 
not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:48.084 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:48.084 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:03:49.982 12:28:15 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:49.982 12:28:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:49.982 12:28:15 -- common/autotest_common.sh@10 -- # set +x 00:03:49.982 12:28:15 -- spdk/autotest.sh@91 -- # rm -f 00:03:49.982 12:28:15 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:50.547 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:50.547 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:50.547 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:50.547 12:28:16 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:50.547 12:28:16 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:50.547 12:28:16 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:50.547 12:28:16 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:50.547 12:28:16 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:50.547 12:28:16 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:50.547 12:28:16 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:50.547 12:28:16 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:50.547 12:28:16 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:50.548 12:28:16 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:50.548 12:28:16 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:50.548 12:28:16 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:50.548 12:28:16 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:50.548 12:28:16 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:50.548 12:28:16 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:50.548 12:28:16 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:50.548 12:28:16 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:50.548 12:28:16 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:50.548 12:28:16 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:50.548 12:28:16 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:50.548 12:28:16 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:50.548 12:28:16 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:50.548 12:28:16 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:50.548 12:28:16 -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:50.548 12:28:16 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:50.548 12:28:16 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:50.548 12:28:16 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:50.548 12:28:16 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:50.548 12:28:16 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:50.548 12:28:16 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:50.548 No valid GPT data, bailing 00:03:50.548 12:28:16 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:50.548 12:28:16 -- scripts/common.sh@391 -- # pt= 00:03:50.548 12:28:16 -- scripts/common.sh@392 -- # return 1 00:03:50.548 12:28:16 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:50.548 1+0 records in 00:03:50.548 1+0 records out 00:03:50.548 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00489375 s, 214 MB/s 00:03:50.548 12:28:16 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:50.548 12:28:16 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:50.548 12:28:16 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:03:50.548 12:28:16 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:03:50.548 12:28:16 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:50.805 No valid GPT data, bailing 00:03:50.805 12:28:16 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:50.805 12:28:16 -- scripts/common.sh@391 -- # pt= 00:03:50.805 12:28:16 -- scripts/common.sh@392 -- # return 1 00:03:50.805 12:28:16 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:50.805 1+0 records in 00:03:50.805 1+0 records out 00:03:50.805 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00544085 s, 193 MB/s 00:03:50.805 12:28:16 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:50.805 12:28:16 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:50.805 12:28:16 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:03:50.805 12:28:16 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:03:50.805 12:28:16 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:50.805 No valid GPT data, bailing 00:03:50.805 12:28:16 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:50.805 12:28:16 -- scripts/common.sh@391 -- # pt= 00:03:50.805 12:28:16 -- scripts/common.sh@392 -- # return 1 00:03:50.805 12:28:16 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:50.805 1+0 records in 00:03:50.805 1+0 records out 00:03:50.805 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00454137 s, 231 MB/s 00:03:50.805 12:28:16 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:50.805 12:28:16 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:50.805 12:28:16 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:03:50.805 12:28:16 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:03:50.805 12:28:16 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:50.805 No valid GPT data, bailing 00:03:50.805 12:28:16 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:50.805 12:28:16 -- scripts/common.sh@391 -- # pt= 00:03:50.805 12:28:16 -- scripts/common.sh@392 -- # return 1 00:03:50.805 12:28:16 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 
00:03:50.805 1+0 records in 00:03:50.805 1+0 records out 00:03:50.805 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00525356 s, 200 MB/s 00:03:50.805 12:28:16 -- spdk/autotest.sh@118 -- # sync 00:03:51.063 12:28:16 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:51.063 12:28:16 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:51.063 12:28:16 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:52.962 12:28:18 -- spdk/autotest.sh@124 -- # uname -s 00:03:52.962 12:28:18 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:52.962 12:28:18 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:52.962 12:28:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:52.962 12:28:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:52.962 12:28:18 -- common/autotest_common.sh@10 -- # set +x 00:03:52.962 ************************************ 00:03:52.962 START TEST setup.sh 00:03:52.962 ************************************ 00:03:52.962 12:28:18 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:52.962 * Looking for test storage... 00:03:52.962 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:52.962 12:28:18 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:52.962 12:28:18 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:52.962 12:28:18 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:52.962 12:28:18 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:52.962 12:28:18 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:52.962 12:28:18 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:52.962 ************************************ 00:03:52.962 START TEST acl 00:03:52.962 ************************************ 00:03:52.962 12:28:18 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:52.962 * Looking for test storage... 
00:03:52.962 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:52.962 12:28:18 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:52.962 12:28:18 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:52.962 12:28:18 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:52.962 12:28:18 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:52.962 12:28:18 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:52.962 12:28:18 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:52.962 12:28:18 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:52.962 12:28:18 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:52.962 12:28:18 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:52.962 12:28:18 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:52.962 12:28:18 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:52.962 12:28:18 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:52.962 12:28:18 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:52.962 12:28:18 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:52.962 12:28:18 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:52.962 12:28:18 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:52.962 12:28:18 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:52.962 12:28:18 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:52.962 12:28:18 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:52.962 12:28:18 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:52.962 12:28:18 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:52.962 12:28:18 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:52.962 12:28:18 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:52.962 12:28:18 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:52.962 12:28:18 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:52.962 12:28:18 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:52.962 12:28:18 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:52.962 12:28:18 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:52.962 12:28:18 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:52.962 12:28:18 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:52.962 12:28:18 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:53.900 12:28:19 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:53.900 12:28:19 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:53.900 12:28:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:53.900 12:28:19 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:53.900 12:28:19 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.900 12:28:19 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:54.465 12:28:20 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:03:54.465 12:28:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:54.465 12:28:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.465 Hugepages 00:03:54.465 node hugesize free / total 00:03:54.465 12:28:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:54.465 12:28:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:54.465 12:28:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.465 00:03:54.465 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:54.465 12:28:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:54.465 12:28:20 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:54.465 12:28:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.465 12:28:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:54.465 12:28:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:54.465 12:28:20 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:54.465 12:28:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.465 12:28:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:03:54.465 12:28:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:54.465 12:28:20 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:54.465 12:28:20 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:54.465 12:28:20 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:54.465 12:28:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.465 12:28:20 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:03:54.465 12:28:20 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:54.465 12:28:20 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:54.465 12:28:20 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:54.465 12:28:20 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:54.465 12:28:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:54.465 12:28:20 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:03:54.465 12:28:20 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:54.465 12:28:20 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:54.465 12:28:20 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:54.465 12:28:20 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:54.465 ************************************ 00:03:54.465 START TEST denied 00:03:54.465 ************************************ 00:03:54.465 12:28:20 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:54.465 12:28:20 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:03:54.465 12:28:20 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:54.465 12:28:20 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.465 12:28:20 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:03:54.465 12:28:20 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:55.417 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:03:55.417 12:28:21 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:03:55.417 12:28:21 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:03:55.417 12:28:21 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:55.417 12:28:21 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:03:55.417 12:28:21 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:03:55.417 12:28:21 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:55.417 12:28:21 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:55.417 12:28:21 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:55.417 12:28:21 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:55.417 12:28:21 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:56.352 00:03:56.352 real 0m1.530s 00:03:56.352 user 0m0.587s 00:03:56.352 sys 0m0.870s 00:03:56.352 12:28:22 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:56.352 12:28:22 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:56.352 ************************************ 00:03:56.352 END TEST denied 00:03:56.352 ************************************ 00:03:56.352 12:28:22 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:56.352 12:28:22 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:56.352 12:28:22 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:56.352 12:28:22 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.352 12:28:22 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:56.352 ************************************ 00:03:56.352 START TEST allowed 00:03:56.352 ************************************ 00:03:56.352 12:28:22 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:56.352 12:28:22 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:03:56.352 12:28:22 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:56.352 12:28:22 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:56.352 12:28:22 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:03:56.352 12:28:22 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:56.915 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:56.915 12:28:22 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:03:56.915 12:28:22 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:56.915 12:28:22 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:03:56.915 12:28:22 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:03:56.915 12:28:22 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:03:56.915 12:28:22 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:56.915 12:28:22 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:56.915 12:28:22 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:56.915 12:28:22 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:56.915 12:28:22 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:57.847 00:03:57.847 real 0m1.498s 00:03:57.847 user 0m0.649s 00:03:57.847 sys 0m0.837s 00:03:57.847 12:28:23 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:03:57.847 12:28:23 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:57.847 ************************************ 00:03:57.847 END TEST allowed 00:03:57.847 ************************************ 00:03:57.847 12:28:23 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:57.847 00:03:57.847 real 0m4.850s 00:03:57.847 user 0m2.091s 00:03:57.847 sys 0m2.680s 00:03:57.847 12:28:23 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:57.847 12:28:23 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:57.847 ************************************ 00:03:57.847 END TEST acl 00:03:57.847 ************************************ 00:03:57.847 12:28:23 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:57.847 12:28:23 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:57.847 12:28:23 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:57.848 12:28:23 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:57.848 12:28:23 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:57.848 ************************************ 00:03:57.848 START TEST hugepages 00:03:57.848 ************************************ 00:03:57.848 12:28:23 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:57.848 * Looking for test storage... 00:03:57.848 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6044808 kB' 'MemAvailable: 7425320 kB' 'Buffers: 2436 kB' 'Cached: 1594716 kB' 'SwapCached: 0 kB' 'Active: 436084 kB' 'Inactive: 1265804 kB' 'Active(anon): 115224 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265804 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 
kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'AnonPages: 106388 kB' 'Mapped: 48828 kB' 'Shmem: 10488 kB' 'KReclaimable: 61572 kB' 'Slab: 137480 kB' 'SReclaimable: 61572 kB' 'SUnreclaim: 75908 kB' 'KernelStack: 6396 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 335668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.848 12:28:23 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.848 12:28:23 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.848 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.849 12:28:23 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.849 12:28:23 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:57.849 12:28:23 
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:57.849 12:28:23 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:57.849 12:28:23 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:57.849 12:28:23 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:57.849 12:28:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:57.849 ************************************ 00:03:57.849 START TEST default_setup 00:03:57.849 ************************************ 00:03:57.849 12:28:23 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:57.849 12:28:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:57.849 12:28:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:57.849 12:28:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:57.849 12:28:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:57.849 12:28:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:57.849 12:28:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:57.849 12:28:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:57.849 12:28:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:57.849 12:28:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:57.849 12:28:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:57.849 12:28:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:57.849 12:28:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:57.849 12:28:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:57.849 12:28:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:57.849 12:28:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:57.849 12:28:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:57.849 12:28:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:57.849 12:28:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:57.849 12:28:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:57.849 12:28:23 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:57.849 12:28:23 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.849 12:28:23 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:58.414 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:58.673 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:58.673 0000:00:11.0 (1b36 
0010): nvme -> uio_pci_generic 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8128448 kB' 'MemAvailable: 9508868 kB' 'Buffers: 2436 kB' 'Cached: 1594704 kB' 'SwapCached: 0 kB' 'Active: 453332 kB' 'Inactive: 1265812 kB' 'Active(anon): 132472 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265812 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123580 kB' 'Mapped: 48752 kB' 'Shmem: 10464 kB' 'KReclaimable: 61376 kB' 'Slab: 137292 kB' 'SReclaimable: 61376 kB' 'SUnreclaim: 75916 kB' 'KernelStack: 6352 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
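The trace above and below is the per-field scan done by get_meminfo in setup/common.sh: it reads /proc/meminfo (or a per-node meminfo file when a node is passed), splits each line on ': ', keeps issuing "continue" until the requested key matches, then echoes the value. A minimal sketch of that pattern follows; the helper name and field names are taken from the trace, but the body is a reconstruction from what the xtrace shows, not a quote of the repository code, so the actual setup/common.sh implementation may differ in detail.

    # Sketch reconstructed from the xtrace; the real helper uses mapfile and
    # strips a "Node N " prefix for per-node files, which is omitted here.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # fall back to the per-node meminfo when a node id is supplied
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every other meminfo field
            echo "$val"                        # numeric value only, "kB" lands in $_
            return 0
        done < "$mem_f"
        return 1
    }

    get_meminfo Hugepagesize      # -> 2048 on this runner
    get_meminfo HugePages_Total   # -> 1024 once default_setup has allocated the pool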
00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.673 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.674 12:28:24 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- 
# mem=("${mem[@]#Node +([0-9]) }") 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8128448 kB' 'MemAvailable: 9508868 kB' 'Buffers: 2436 kB' 'Cached: 1594704 kB' 'SwapCached: 0 kB' 'Active: 452980 kB' 'Inactive: 1265812 kB' 'Active(anon): 132120 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265812 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123228 kB' 'Mapped: 48752 kB' 'Shmem: 10464 kB' 'KReclaimable: 61376 kB' 'Slab: 137288 kB' 'SReclaimable: 61376 kB' 'SUnreclaim: 75912 kB' 'KernelStack: 6320 kB' 'PageTables: 4156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.674 12:28:24 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.674 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.675 12:28:24 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
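At this point default_setup has already called get_test_nr_hugepages with 2097152 kB against the default 2048 kB page size, which is where the nr_hugepages=1024 and nodes_test[0]=1024 values earlier in the trace come from, and verify_nr_hugepages is now re-reading /proc/meminfo to confirm that HugePages_Total and HugePages_Free report 1024 with zero surplus and reserved pages. A back-of-the-envelope check of that arithmetic, assuming the simple size/page-size division the trace suggests (the variable names below are illustrative, not the script's own):

    # hypothetical re-derivation of the numbers visible in the trace
    size_kb=2097152          # pool size requested from get_test_nr_hugepages
    hugepage_kb=2048         # Hugepagesize reported by /proc/meminfo
    nr_hugepages=$(( size_kb / hugepage_kb ))
    echo "$nr_hugepages"     # 1024, matching nr_hugepages and HugePages_Total above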
00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.675 12:28:24 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.675 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.676 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.676 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.676 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.676 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.676 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.676 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.676 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.676 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.676 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.676 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.676 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.676 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.676 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.676 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.676 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.676 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.676 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.936 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.936 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.936 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.936 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.936 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8128448 kB' 'MemAvailable: 9508872 kB' 'Buffers: 2436 kB' 'Cached: 1594704 kB' 'SwapCached: 0 kB' 'Active: 453024 kB' 'Inactive: 1265816 kB' 'Active(anon): 132164 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265816 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123244 kB' 'Mapped: 48628 kB' 'Shmem: 10464 kB' 'KReclaimable: 61376 kB' 'Slab: 137272 kB' 'SReclaimable: 61376 kB' 
'SUnreclaim: 75896 kB' 'KernelStack: 6304 kB' 'PageTables: 4096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.937 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.938 12:28:24 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.938 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.939 12:28:24 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:58.939 nr_hugepages=1024 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:58.939 resv_hugepages=0 00:03:58.939 surplus_hugepages=0 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:58.939 anon_hugepages=0 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8128200 kB' 'MemAvailable: 9508544 kB' 'Buffers: 2436 kB' 'Cached: 1594708 kB' 'SwapCached: 0 kB' 'Active: 452384 kB' 'Inactive: 1265820 kB' 'Active(anon): 131524 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265820 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122636 kB' 'Mapped: 48628 kB' 'Shmem: 10464 kB' 'KReclaimable: 61208 kB' 'Slab: 137008 kB' 'SReclaimable: 61208 kB' 'SUnreclaim: 75800 kB' 'KernelStack: 6336 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 
6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.939 12:28:24 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.939 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
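The xtrace here is setup/common.sh's get_meminfo walking every /proc/meminfo field until it reaches the one it was asked for. Reduced to a standalone sketch with the same loop shape visible in the trace (split each line on ': ', compare the field name, echo the value) — the helper name below is hypothetical, and the real function additionally snapshots the file with mapfile first:

  # Minimal sketch of the lookup the trace performs; meminfo_get is a
  # hypothetical name standing in for the helper in setup/common.sh.
  meminfo_get() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          # Echo the numeric value once the requested field is found.
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done < /proc/meminfo
      return 1
  }

  # Example on this runner: meminfo_get HugePages_Rsvd   -> 0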
00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
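The HugePages_Total value being parsed in this pass feeds the accounting assertions at setup/hugepages.sh@107-@110: the 1024 pages the test configured must all be visible, with nothing reserved or surplus. A self-contained sketch of that check follows; the awk lookups and the failure branch are illustrative stand-ins, not the script's own code, and the values in the comments are the ones this run reports:

  # Sketch of the hugepage accounting check: total pages must equal the
  # requested count plus surplus plus reserved (all zero surplus/reserved here).
  nr_hugepages=1024
  surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)   # 0 in this run
  resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)   # 0 in this run
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)  # 1024 in this run

  if (( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages )); then
      echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
  else
      echo "hugepage accounting mismatch: total=$total surp=$surp resv=$resv" >&2
      exit 1
  fi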
00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@32 -- # no_nodes=1 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.940 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8128200 kB' 'MemUsed: 4113776 kB' 'SwapCached: 0 kB' 'Active: 452664 kB' 'Inactive: 1265820 kB' 'Active(anon): 131804 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265820 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 1597144 kB' 'Mapped: 48628 kB' 'AnonPages: 122916 kB' 'Shmem: 10464 kB' 'KernelStack: 6336 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61208 kB' 'Slab: 137008 kB' 'SReclaimable: 61208 kB' 'SUnreclaim: 75800 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # continue 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.941 
12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.941 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
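The xtrace above is the meminfo lookup that default_setup leans on: setup/common.sh reads /proc/meminfo (or a per-node meminfo file when a node is given), strips any "Node <n> " prefix, and scans "key: value" pairs until it reaches the requested key, here HugePages_Surp, which it echoes (0 on this runner). The bash below is a minimal sketch of that lookup pattern, assuming only the structure visible in the trace; it is not the verbatim setup/common.sh source, and get_meminfo_sketch is a hypothetical name.

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern used to strip "Node <n> " prefixes

# Illustrative sketch (hypothetical helper) of the lookup the trace walks through.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node lookups read that node's own meminfo file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node <n> "
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        # Echo the value once the requested key matches, e.g. HugePages_Surp -> 0 here.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

Called as get_meminfo_sketch HugePages_Surp on this runner, it would print 0, matching the echo 0 / return 0 pair that closes the scan above.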
00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:58.942 node0=1024 expecting 1024 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:58.942 00:03:58.942 real 0m0.990s 00:03:58.942 user 0m0.456s 00:03:58.942 sys 0m0.466s 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:58.942 12:28:24 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:58.942 ************************************ 00:03:58.942 END TEST default_setup 00:03:58.942 ************************************ 00:03:58.942 12:28:24 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:58.942 12:28:24 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:58.942 12:28:24 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.942 12:28:24 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.942 12:28:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:58.942 ************************************ 00:03:58.942 START TEST per_node_1G_alloc 00:03:58.942 ************************************ 00:03:58.942 12:28:24 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:58.942 12:28:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:58.942 12:28:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:03:58.942 12:28:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:58.942 12:28:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:58.942 12:28:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:58.942 12:28:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:58.942 12:28:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:58.942 12:28:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:58.942 12:28:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:58.942 12:28:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:58.942 12:28:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:58.942 12:28:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:58.942 12:28:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:58.942 12:28:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:58.942 12:28:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:58.942 12:28:24 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:58.942 12:28:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:58.942 12:28:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:58.942 12:28:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:58.942 12:28:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:58.942 12:28:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:58.942 12:28:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:03:58.942 12:28:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:58.942 12:28:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.942 12:28:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:59.200 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:59.200 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:59.200 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:59.200 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:03:59.200 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:59.200 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:59.200 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:59.200 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:59.200 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:59.200 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9175204 kB' 'MemAvailable: 10555548 kB' 'Buffers: 2436 kB' 'Cached: 1594708 kB' 'SwapCached: 0 kB' 'Active: 452968 kB' 'Inactive: 1265820 kB' 'Active(anon): 132108 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265820 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123216 kB' 'Mapped: 48756 kB' 'Shmem: 10464 kB' 'KReclaimable: 61204 kB' 'Slab: 136968 kB' 'SReclaimable: 61204 kB' 'SUnreclaim: 75764 kB' 'KernelStack: 6292 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.201 12:28:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.201 12:28:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.201 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.202 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.202 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.463 12:28:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.463 12:28:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.463 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9177088 kB' 'MemAvailable: 10557432 kB' 'Buffers: 2436 kB' 'Cached: 1594708 kB' 'SwapCached: 0 kB' 'Active: 452704 kB' 'Inactive: 1265820 kB' 'Active(anon): 131844 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265820 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122960 kB' 'Mapped: 48628 kB' 'Shmem: 10464 kB' 'KReclaimable: 61204 kB' 'Slab: 136996 kB' 'SReclaimable: 61204 kB' 'SUnreclaim: 75792 kB' 'KernelStack: 6352 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.464 12:28:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.464 12:28:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.464 12:28:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.464 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.465 12:28:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.465 12:28:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.465 12:28:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9177176 kB' 'MemAvailable: 10557520 kB' 'Buffers: 2436 kB' 'Cached: 1594708 kB' 'SwapCached: 0 kB' 'Active: 452352 kB' 'Inactive: 1265820 kB' 'Active(anon): 131492 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265820 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122652 kB' 'Mapped: 48628 kB' 'Shmem: 10464 kB' 'KReclaimable: 61204 kB' 'Slab: 136996 kB' 'SReclaimable: 61204 kB' 'SUnreclaim: 75792 kB' 'KernelStack: 6336 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.465 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
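The long runs of '[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]' followed by 'continue' above are setup/common.sh's get_meminfo helper scanning /proc/meminfo one field at a time under xtrace: every field that is not the one being asked for is skipped, and the matching field's value is echoed back (the '# echo 0' / '# return 0' pairs). A minimal sketch of that pattern, reconstructed from the trace and simplified rather than copied from the SPDK script:

  # Sketch only: mirrors the loop visible in the xtrace above, not the exact
  # setup/common.sh implementation.
  shopt -s extglob
  get_meminfo() {
    local get=$1 node=${2:-}           # field name, optional NUMA node
    local var val _ line
    local mem_f=/proc/meminfo
    local -a mem
    # With a node argument, read the per-node meminfo instead of the global one.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
      mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
      IFS=': ' read -r var val _ <<< "$line"
      [[ $var == "$get" ]] || continue   # each skipped field is one 'continue' entry in the log
      echo "$val"
      return 0
    done
    return 1
  }

Called as 'get_meminfo HugePages_Rsvd' (system-wide) or 'get_meminfo HugePages_Surp 0' (node 0), which are exactly the two shapes this part of the log exercises.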
00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.466 12:28:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
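The backslash-riddled right-hand sides (\H\u\g\e\P\a\g\e\s\_\R\s\v\d and similar) are not log corruption: when bash's xtrace prints a [[ ... == ... ]] comparison whose right-hand word is matched literally rather than as a glob, it escapes each character of that word. A tiny stand-alone demo (hypothetical values, not taken from the script) of the rendering seen here:

  get=HugePages_Rsvd
  var=MemTotal
  set -x
  [[ $var == "$get" ]] || true   # the trace line comes out roughly as:
  set +x                         #   [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]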
00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.466 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.467 
12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.467 12:28:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:59.467 nr_hugepages=512 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:59.467 resv_hugepages=0 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:59.467 surplus_hugepages=0 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:59.467 anon_hugepages=0 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9177176 kB' 'MemAvailable: 10557520 kB' 'Buffers: 2436 kB' 'Cached: 1594708 kB' 'SwapCached: 0 kB' 'Active: 452556 kB' 'Inactive: 1265820 kB' 'Active(anon): 131696 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265820 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 
kB' 'Writeback: 0 kB' 'AnonPages: 122856 kB' 'Mapped: 48628 kB' 'Shmem: 10464 kB' 'KReclaimable: 61204 kB' 'Slab: 136988 kB' 'SReclaimable: 61204 kB' 'SUnreclaim: 75784 kB' 'KernelStack: 6320 kB' 'PageTables: 4144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.467 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
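With surp and resv both measured as 0, hugepages.sh echoes its bookkeeping values (nr_hugepages=512, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and then asserts that the kernel's global pool adds up: HugePages_Total must equal nr_hugepages + surp + resv, which is why HugePages_Total is being re-read in the meminfo dump above. A condensed sketch of that consistency check, with this run's numbers hard-coded and reusing the get_meminfo sketch earlier:

  # Values taken from this run; the arithmetic mirrors the (( ... )) checks in the trace.
  nr_hugepages=512 surp=0 resv=0
  total=$(get_meminfo HugePages_Total)            # 512 here
  if (( total == nr_hugepages + surp + resv )); then
    echo "global hugepage pool consistent: $total pages"
  else
    echo "mismatch: kernel reports $total, expected $((nr_hugepages + surp + resv))" >&2
  fi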
00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
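Further down in this trace the same machinery is pointed at individual NUMA nodes: get_nodes walks /sys/devices/system/node/node*, records the 512 pages expected on each node, and get_meminfo is invoked with node=0 so that mem_f switches to /sys/devices/system/node/node0/meminfo; the test then prints 'node0=512 expecting 512' and passes the final [[ 512 == 512 ]] comparison. A rough model of that per-node check (simplified, with illustrative variable names; it assumes the get_meminfo sketch above and the single-node VM used in this run):

  # Per-node requested-vs-present comparison, modelled on the hugepages.sh trace below.
  resv=0
  for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    requested=512                                               # pages asked for on this node
    found=$(( $(get_meminfo HugePages_Total "$node") + resv ))  # pages the kernel reports there
    echo "node${node}=${requested} expecting ${found}"
    (( requested == found )) || { echo "node $node hugepage mismatch" >&2; exit 1; }
  done

On this single-node guest the loop runs once and both sides are 512, which is what the 'expecting 512' line and the END TEST marker below confirm.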
00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.468 12:28:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.468 
12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.468 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # continue 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9177176 kB' 'MemUsed: 3064800 kB' 'SwapCached: 0 kB' 'Active: 452484 kB' 'Inactive: 1265820 kB' 'Active(anon): 131624 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265820 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1597144 kB' 'Mapped: 48628 kB' 'AnonPages: 122736 kB' 'Shmem: 10464 kB' 'KernelStack: 6288 kB' 'PageTables: 4036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'KReclaimable: 61204 kB' 'Slab: 136988 kB' 'SReclaimable: 61204 kB' 'SUnreclaim: 75784 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.469 12:28:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.469 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.470 12:28:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.470 12:28:25 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.470 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.471 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.471 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.471 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.471 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.471 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.471 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.471 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.471 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.471 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.471 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:59.471 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.471 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.471 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.471 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.471 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:59.471 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:59.471 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:59.471 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:59.471 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:59.471 node0=512 expecting 512 00:03:59.471 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:59.471 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:59.471 00:03:59.471 real 0m0.502s 00:03:59.471 user 0m0.231s 00:03:59.471 sys 0m0.304s 00:03:59.471 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.471 12:28:25 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:59.471 ************************************ 00:03:59.471 END TEST per_node_1G_alloc 00:03:59.471 ************************************ 00:03:59.471 12:28:25 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:59.471 12:28:25 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:59.471 12:28:25 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.471 12:28:25 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.471 12:28:25 
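The per_node_1G_alloc block above ends by printing the per-node result it was checking for ("node0=512 expecting 512") before its END TEST banner. Roughly the same check can be made directly against the kernel's sysfs counters; the snippet below is a minimal sketch in bash, assuming the standard per-node hugepage layout under /sys/devices/system/node and a 2048 kB hugepage size — the 512 target comes from the log, while the variable names and the script itself are illustrative and not part of setup/hugepages.sh.

    # Sketch: confirm NUMA node 0 holds the expected number of 2048 kB hugepages.
    expected=512
    node_path=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    actual=$(cat "$node_path")
    if [[ "$actual" -eq "$expected" ]]; then
        echo "node0=$actual expecting $expected"              # same message the test prints
    else
        echo "node0=$actual expecting $expected (mismatch)" >&2
        exit 1
    fi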
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:59.471 ************************************ 00:03:59.471 START TEST even_2G_alloc 00:03:59.471 ************************************ 00:03:59.471 12:28:25 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:59.471 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:59.471 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:59.471 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:59.471 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:59.471 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:59.471 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:59.471 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:59.471 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:59.471 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:59.471 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:59.471 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:59.471 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:59.471 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:59.471 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:59.471 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:59.471 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:03:59.471 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:59.471 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:59.471 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:59.471 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:59.471 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:59.471 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:59.471 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.471 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:59.729 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:59.729 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:59.729 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:59.729 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:59.729 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:59.729 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:59.729 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:59.729 12:28:25 setup.sh.hugepages.even_2G_alloc 
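The even_2G_alloc setup traced above converts the requested size into a page count before handing off to scripts/setup.sh: 2097152 kB at the default Hugepagesize of 2048 kB yields the nr_hugepages=1024 and NRHUGE=1024 seen in the log. The snippet below is a minimal sketch of that arithmetic only; the variable names are illustrative, not the exact helpers from setup/hugepages.sh.

    # Sketch: derive the hugepage count for the "even 2G" allocation.
    size_kb=2097152                                                      # 2 GiB requested by even_2G_alloc
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 kB on this VM
    nr_hugepages=$(( size_kb / hugepagesize_kb ))
    echo "NRHUGE=$nr_hugepages"                                          # -> NRHUGE=1024, matching the trace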
-- setup/hugepages.sh@92 -- # local surp 00:03:59.729 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:59.729 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:59.729 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:59.729 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:59.729 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:59.729 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:59.729 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:59.729 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.995 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.995 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.995 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.995 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.995 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.995 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.995 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.995 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8127824 kB' 'MemAvailable: 9508168 kB' 'Buffers: 2436 kB' 'Cached: 1594708 kB' 'SwapCached: 0 kB' 'Active: 452832 kB' 'Inactive: 1265820 kB' 'Active(anon): 131972 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265820 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123028 kB' 'Mapped: 48776 kB' 'Shmem: 10464 kB' 'KReclaimable: 61204 kB' 'Slab: 136960 kB' 'SReclaimable: 61204 kB' 'SUnreclaim: 75756 kB' 'KernelStack: 6312 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:03:59.995 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.995 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.995 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.995 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.995 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.995 12:28:25 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:59.995 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.996 12:28:25 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.996 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.997 12:28:25 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.997 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8127824 kB' 'MemAvailable: 9508168 kB' 'Buffers: 2436 kB' 'Cached: 1594708 kB' 'SwapCached: 0 kB' 'Active: 452468 kB' 'Inactive: 
1265820 kB' 'Active(anon): 131608 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265820 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122660 kB' 'Mapped: 48668 kB' 'Shmem: 10464 kB' 'KReclaimable: 61204 kB' 'Slab: 136960 kB' 'SReclaimable: 61204 kB' 'SUnreclaim: 75756 kB' 'KernelStack: 6340 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
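Each get_meminfo call traced here walks /proc/meminfo with IFS=': ' and read -r, hitting continue for every field until the requested key matches, then echoes that field's value — which is why the AnonHugePages lookup a few entries above resolved to anon=0. The function below is a standalone, simplified sketch of that loop; get_meminfo_field is an illustrative name, and the per-node handling that the real setup/common.sh helper supports is deliberately omitted.

    # Sketch: fetch one field from /proc/meminfo the way the traced loop does.
    get_meminfo_field() {
        local get=$1 var val rest
        while IFS=': ' read -r var val rest; do
            [[ $var == "$get" ]] || continue    # skip fields until the requested key matches
            echo "$val"                         # the IFS split already drops the "kB" unit
            return 0
        done < /proc/meminfo
        return 1                                # requested key not present
    }

    anon=$(get_meminfo_field AnonHugePages)     # -> 0 on this run, matching anon=0 in the trace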
_ 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.998 12:28:25 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.998 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.999 12:28:25 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.999 12:28:25 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.999 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8127824 kB' 'MemAvailable: 9508168 kB' 'Buffers: 2436 kB' 'Cached: 1594708 kB' 'SwapCached: 0 kB' 'Active: 452468 kB' 'Inactive: 1265820 kB' 'Active(anon): 131608 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265820 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122920 kB' 'Mapped: 48668 kB' 'Shmem: 10464 kB' 'KReclaimable: 61204 kB' 'Slab: 136960 kB' 'SReclaimable: 61204 kB' 'SUnreclaim: 75756 kB' 'KernelStack: 6340 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.000 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- 
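HugePages_Surp was just read back as surp=0, and the meminfo snapshots echoed by these get_meminfo calls already show the pool state the test is verifying: HugePages_Total: 1024 and HugePages_Free: 1024 with HugePages_Rsvd: 0 and HugePages_Surp: 0 at a Hugepagesize of 2048 kB, i.e. 1024 × 2048 kB = 2097152 kB, matching both the Hugetlb: 2097152 kB line and the 2 GiB that even_2G_alloc requested. The snippet below is a quick sanity check with those values hard-coded from the snapshot; it is purely illustrative.

    # Sketch: cross-check hugepage accounting from the snapshot values above.
    total=1024; free=1024; rsvd=0; surp=0; hugepagesize_kb=2048
    echo "hugetlb_kb=$(( total * hugepagesize_kb ))"     # 2097152 kB, matches the Hugetlb line
    echo "in_use=$(( total - free ))"                    # 0: none of the pool is mapped yet
    echo "surplus=$surp"                                 # 0: nothing allocated beyond nr_hugepages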
setup/common.sh@31 -- # read -r var val _ 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.001 12:28:25 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.001 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.002 12:28:25 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.002 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@33 -- # return 0 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:00.003 nr_hugepages=1024 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:00.003 resv_hugepages=0 00:04:00.003 surplus_hugepages=0 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:00.003 anon_hugepages=0 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8127824 kB' 'MemAvailable: 9508168 kB' 'Buffers: 2436 kB' 'Cached: 1594708 kB' 'SwapCached: 0 kB' 'Active: 452468 kB' 'Inactive: 1265820 kB' 'Active(anon): 131608 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265820 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123008 kB' 'Mapped: 48668 kB' 'Shmem: 10464 kB' 'KReclaimable: 61204 kB' 'Slab: 136960 kB' 'SReclaimable: 61204 kB' 'SUnreclaim: 75756 kB' 'KernelStack: 6356 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:00.003 12:28:25 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.003 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.004 12:28:25 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.004 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
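The trace above is setup/common.sh's get_meminfo scanning /proc/meminfo with IFS=': ' and read -r var val _ until it reaches the requested key (here HugePages_Total). A minimal standalone sketch of that lookup pattern, written for illustration and not taken from the actual setup/common.sh source, could look like this:

    shopt -s extglob
    get_meminfo_sketch() {   # get_meminfo_sketch <key> [<numa node>]
        local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
        # prefer the per-node meminfo file when a node is given, as the traced common.sh@23-24 do
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")     # strip the "Node N " prefix present in per-node files
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }   # e.g. HugePages_Total -> 1024 on this runner
        done
        return 1
    }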
00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:00.005 12:28:25 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.005 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8128344 kB' 'MemUsed: 4113632 kB' 'SwapCached: 0 kB' 'Active: 452568 kB' 'Inactive: 1265820 kB' 'Active(anon): 131708 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265820 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1597144 kB' 'Mapped: 48668 kB' 'AnonPages: 122960 kB' 'Shmem: 10464 kB' 'KernelStack: 6372 kB' 'PageTables: 4316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61204 kB' 'Slab: 136960 kB' 'SReclaimable: 61204 kB' 'SUnreclaim: 75756 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.006 12:28:25 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.006 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.007 12:28:25 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:00.007 node0=1024 expecting 1024 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:00.007 00:04:00.007 real 0m0.537s 00:04:00.007 user 0m0.293s 00:04:00.007 sys 0m0.274s 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:00.007 12:28:25 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:00.007 ************************************ 00:04:00.007 END TEST even_2G_alloc 00:04:00.008 ************************************ 00:04:00.008 12:28:26 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:00.008 12:28:26 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:00.008 12:28:26 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:00.008 12:28:26 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.008 12:28:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:00.008 ************************************ 00:04:00.008 START TEST odd_alloc 00:04:00.008 ************************************ 00:04:00.008 12:28:26 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:00.008 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:00.008 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:00.008 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:00.008 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:00.008 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:00.008 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:00.008 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:00.008 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:00.008 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:00.008 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:00.008 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:00.008 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:00.008 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:00.008 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:00.008 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:00.008 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:00.008 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:00.008 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:00.008 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:00.008 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:00.008 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 
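For reference, the odd_alloc sizing visible in the trace above: get_test_nr_hugepages is handed 2098176 kB (HUGEMEM=2049 MiB) and, with the 2048 kB Hugepagesize reported by meminfo, ends up with nr_hugepages=1025, a deliberately odd page count assigned entirely to the single node of this VM. A rough sketch that reproduces those numbers (the exact rounding inside setup/hugepages.sh is not shown in this excerpt):

    hugemem_mb=2049                              # HUGEMEM exported at hugepages.sh@160
    size_kb=$(( hugemem_mb * 1024 ))             # 2098176 kB, the get_test_nr_hugepages argument
    hugepagesize_kb=2048                         # Hugepagesize from /proc/meminfo
    nr_hugepages=$(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb ))   # 1025 pages (odd)
    echo "nr_hugepages=$nr_hugepages Hugetlb=$(( nr_hugepages * hugepagesize_kb )) kB"
    # -> nr_hugepages=1025 Hugetlb=2099200 kB, matching the meminfo dump later in the log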
00:04:00.008 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:00.008 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.008 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:00.301 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:00.301 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:00.301 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:00.563 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:00.563 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:00.563 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:00.563 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:00.563 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:00.563 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:00.563 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:00.563 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:00.563 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:00.563 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:00.563 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:00.563 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:00.563 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.563 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.563 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.563 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.563 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.563 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.563 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.563 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8124164 kB' 'MemAvailable: 9504508 kB' 'Buffers: 2436 kB' 'Cached: 1594708 kB' 'SwapCached: 0 kB' 'Active: 452864 kB' 'Inactive: 1265820 kB' 'Active(anon): 132004 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265820 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 123140 kB' 'Mapped: 48752 kB' 'Shmem: 10464 kB' 'KReclaimable: 61204 kB' 'Slab: 136996 kB' 'SReclaimable: 61204 kB' 'SUnreclaim: 75792 kB' 'KernelStack: 6324 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 352492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.564 12:28:26 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.564 
12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.564 12:28:26 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.564 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.565 
12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
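The get_meminfo walk traced above (setup/common.sh@17–33) repeats the same pattern for every field: read /proc/meminfo (or a per-node meminfo file when a node is given), split each line on ': ', skip non-matching fields with continue, and echo the value once the requested name matches. A minimal self-contained sketch of that pattern, using the hypothetical helper name meminfo_value and a plain read loop rather than the script's mapfile array:

    # Sketch of the /proc/meminfo lookup pattern visible in the trace above.
    # meminfo_value is a hypothetical helper name, not part of setup/common.sh.
    meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # var is the field name, val the number; the unit (kB) lands in _
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    meminfo_value HugePages_Total    # e.g. prints 1025 on the test VM traced here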
00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8124164 kB' 'MemAvailable: 9504508 kB' 'Buffers: 2436 kB' 'Cached: 1594708 kB' 'SwapCached: 0 kB' 'Active: 452656 kB' 'Inactive: 1265820 kB' 'Active(anon): 131796 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265820 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122700 kB' 'Mapped: 48628 kB' 'Shmem: 10464 kB' 'KReclaimable: 61204 kB' 'Slab: 136996 kB' 'SReclaimable: 61204 kB' 'SUnreclaim: 75792 kB' 'KernelStack: 6352 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 352492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.565 12:28:26 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.565 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 
12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.566 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8124164 kB' 'MemAvailable: 9504508 kB' 'Buffers: 2436 kB' 'Cached: 1594708 kB' 'SwapCached: 0 kB' 'Active: 452404 kB' 'Inactive: 1265820 kB' 'Active(anon): 131544 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265820 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122960 kB' 'Mapped: 48628 kB' 'Shmem: 10464 kB' 'KReclaimable: 61204 kB' 'Slab: 136996 kB' 'SReclaimable: 61204 kB' 'SUnreclaim: 75792 kB' 'KernelStack: 6352 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 352492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 12:28:26 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.567 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.568 12:28:26 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:00.568 nr_hugepages=1025 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:00.568 resv_hugepages=0 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:00.568 surplus_hugepages=0 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:00.568 anon_hugepages=0 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:00.568 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8124416 kB' 'MemAvailable: 9504760 kB' 'Buffers: 2436 kB' 'Cached: 1594708 kB' 'SwapCached: 0 kB' 'Active: 452616 kB' 'Inactive: 1265820 kB' 'Active(anon): 131756 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265820 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122972 kB' 'Mapped: 48628 kB' 'Shmem: 10464 kB' 'KReclaimable: 61204 kB' 'Slab: 136996 kB' 'SReclaimable: 61204 kB' 'SUnreclaim: 75792 kB' 'KernelStack: 6384 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 355548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 12:28:26 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.569 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.570 12:28:26 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@33 -- # echo 1025 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8124668 kB' 'MemUsed: 4117308 kB' 'SwapCached: 0 kB' 'Active: 452460 kB' 'Inactive: 1265820 kB' 'Active(anon): 131600 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265820 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1597144 kB' 'Mapped: 48628 kB' 'AnonPages: 122772 kB' 'Shmem: 10464 kB' 'KernelStack: 6320 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61204 kB' 'Slab: 136988 kB' 'SReclaimable: 61204 kB' 'SUnreclaim: 75784 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 12:28:26 
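# --- editor's sketch, not part of the build log -----------------------------
# A minimal paraphrase of the get_meminfo helper whose xtrace fills the lines
# above (setup/common.sh): it reads /proc/meminfo, or the per-node file when a
# node number is given, strips any "Node N " prefix, then scans field by field
# (the long "[[ ... ]] || continue" run) until it hits the requested key and
# echoes its value.  Names and files come from the trace; the exact control
# flow is an assumption, not SPDK source.
shopt -s extglob                      # for the +([0-9]) prefix strip seen above

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # with a node argument the per-node statistics file is preferred
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")  # drop the "Node 0 " prefix of per-node files
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip every other meminfo field
        echo "$val"                        # e.g. 1025 for HugePages_Total
        return 0
    done < <(printf '%s\n' "${mem[@]}")
}
# usage, as in the trace: get_meminfo HugePages_Total ; get_meminfo HugePages_Surp 0
# -----------------------------------------------------------------------------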
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.570 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 
00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:00.571 node0=1025 expecting 1025 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:00.571 12:28:26 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:00.571 00:04:00.571 real 0m0.515s 00:04:00.571 user 0m0.252s 00:04:00.571 sys 0m0.297s 00:04:00.572 12:28:26 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:00.572 12:28:26 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:00.572 ************************************ 00:04:00.572 END TEST odd_alloc 00:04:00.572 ************************************ 00:04:00.572 12:28:26 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:00.572 12:28:26 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:00.572 12:28:26 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:00.572 12:28:26 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.572 12:28:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:00.572 ************************************ 00:04:00.572 START TEST custom_alloc 00:04:00.572 ************************************ 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- 
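# --- editor's sketch, not part of the build log -----------------------------
# How the numbers above fit together: odd_alloc passes once the pool it asked
# for (1025 pages) matches what get_meminfo reports after surplus and reserved
# pages are added back, and custom_alloc turns its requested size in kB into a
# page count using the 2048 kB huge page size from the meminfo dump.  Values
# are taken from the trace; the helper name pages_for_size is an assumption.
default_hugepages=2048                        # kB, from "Hugepagesize: 2048 kB"
pages_for_size() { echo $(( $1 / default_hugepages )); }
pages_for_size 1048576                        # -> 512, matching nr_hugepages=512

nr_hugepages=1025 surp=0 resv=0               # reported by get_meminfo above
(( 1025 == nr_hugepages + surp + resv )) && echo "odd_alloc pool verified"
# -----------------------------------------------------------------------------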
setup/hugepages.sh@67 -- # nodes_test=() 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.572 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:01.144 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:01.144 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:01.144 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- 
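# --- editor's sketch, not part of the build log -----------------------------
# The HUGENODE value handed to scripts/setup.sh above is assembled from the
# per-node request array, one "nodes_hp[N]=count" entry per NUMA node; with a
# single node holding 512 pages it collapses to 'nodes_hp[0]=512'.  Array
# names mirror the trace; the standalone loop below is only an illustration.
declare -a nodes_hp=([0]=512)                 # requested pages per NUMA node
declare -a HUGENODE=()
for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
done
echo "${HUGENODE[*]}"                         # -> nodes_hp[0]=512
# -----------------------------------------------------------------------------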
# verify_nr_hugepages 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9173540 kB' 'MemAvailable: 10553888 kB' 'Buffers: 2436 kB' 'Cached: 1594712 kB' 'SwapCached: 0 kB' 'Active: 453024 kB' 'Inactive: 1265824 kB' 'Active(anon): 132164 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265824 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 123332 kB' 'Mapped: 48668 kB' 'Shmem: 10464 kB' 'KReclaimable: 61204 kB' 'Slab: 136992 kB' 'SReclaimable: 61204 kB' 'SUnreclaim: 75788 kB' 'KernelStack: 6388 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 353248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.144 12:28:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.144 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.145 12:28:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.145 12:28:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12241976 kB' 'MemFree: 9173792 kB' 'MemAvailable: 10554136 kB' 'Buffers: 2436 kB' 'Cached: 1594708 kB' 'SwapCached: 0 kB' 'Active: 452844 kB' 'Inactive: 1265820 kB' 'Active(anon): 131984 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265820 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122924 kB' 'Mapped: 48656 kB' 'Shmem: 10464 kB' 'KReclaimable: 61204 kB' 'Slab: 136980 kB' 'SReclaimable: 61204 kB' 'SUnreclaim: 75776 kB' 'KernelStack: 6308 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.145 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.146 12:28:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.146 12:28:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.146 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.146 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.146 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.146 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.146 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.146 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.146 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.146 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.146 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.146 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.146 12:28:27 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:01.146 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.146 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.146 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.146 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.146 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.146 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.146 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.146 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.146 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.146 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.146 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
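Each of these passes (AnonHugePages above, HugePages_Surp here, HugePages_Rsvd next) reduces to a single-key read; purely for orientation, one-shot equivalents with awk (not used by the test scripts themselves) would be:

# Illustrative one-shot equivalents of the traced lookups; the scripts use the
# get_meminfo loop shown earlier, not awk.
awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo
awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo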
00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9173792 kB' 'MemAvailable: 10554140 kB' 'Buffers: 2436 kB' 'Cached: 1594712 kB' 'SwapCached: 0 kB' 'Active: 453000 kB' 'Inactive: 1265824 kB' 'Active(anon): 132140 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265824 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 123288 kB' 'Mapped: 48656 kB' 'Shmem: 10464 kB' 'KReclaimable: 61204 kB' 'Slab: 136948 kB' 'SReclaimable: 61204 kB' 'SUnreclaim: 75744 kB' 'KernelStack: 6324 kB' 'PageTables: 4200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.147 12:28:27 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.147 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.148 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:01.149 nr_hugepages=512 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:01.149 resv_hugepages=0 
00:04:01.149 surplus_hugepages=0 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:01.149 anon_hugepages=0 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9173792 kB' 'MemAvailable: 10554140 kB' 'Buffers: 2436 kB' 'Cached: 1594712 kB' 'SwapCached: 0 kB' 'Active: 452472 kB' 'Inactive: 1265824 kB' 'Active(anon): 131612 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265824 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122792 kB' 'Mapped: 48656 kB' 'Shmem: 10464 kB' 'KReclaimable: 61204 kB' 'Slab: 136944 kB' 'SReclaimable: 61204 kB' 'SUnreclaim: 75740 kB' 'KernelStack: 6308 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 352492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.149 12:28:27 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.149 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.150 12:28:27 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.150 
12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.150 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 
-- # mem_f=/proc/meminfo 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 9173792 kB' 'MemUsed: 3068184 kB' 'SwapCached: 0 kB' 'Active: 452472 kB' 'Inactive: 1265824 kB' 'Active(anon): 131612 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265824 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 1597148 kB' 'Mapped: 48656 kB' 'AnonPages: 122792 kB' 'Shmem: 10464 kB' 'KernelStack: 6376 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61204 kB' 'Slab: 136944 kB' 'SReclaimable: 61204 kB' 'SUnreclaim: 75740 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.151 12:28:27 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.151 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.152 12:28:27 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:01.152 node0=512 expecting 512 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:01.152 00:04:01.152 real 0m0.488s 00:04:01.152 user 0m0.235s 00:04:01.152 sys 0m0.286s 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.152 ************************************ 00:04:01.152 END TEST custom_alloc 00:04:01.152 ************************************ 00:04:01.152 12:28:27 setup.sh.hugepages.custom_alloc 
-- common/autotest_common.sh@10 -- # set +x 00:04:01.152 12:28:27 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:01.152 12:28:27 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:01.152 12:28:27 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.152 12:28:27 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.152 12:28:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:01.152 ************************************ 00:04:01.152 START TEST no_shrink_alloc 00:04:01.152 ************************************ 00:04:01.152 12:28:27 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:01.152 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:01.152 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:01.152 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:01.152 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:01.152 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:01.152 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:01.152 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:01.152 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:01.152 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:01.152 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:01.153 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:01.153 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:01.153 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:01.153 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:01.153 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:01.153 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:01.153 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:01.153 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:01.153 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:01.153 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:01.153 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.153 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:01.410 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:01.410 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:01.410 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:01.673 
12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8126876 kB' 'MemAvailable: 9507224 kB' 'Buffers: 2436 kB' 'Cached: 1594712 kB' 'SwapCached: 0 kB' 'Active: 452836 kB' 'Inactive: 1265824 kB' 'Active(anon): 131976 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265824 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 123176 kB' 'Mapped: 48756 kB' 'Shmem: 10464 kB' 'KReclaimable: 61204 kB' 'Slab: 136976 kB' 'SReclaimable: 61204 kB' 'SUnreclaim: 75772 kB' 'KernelStack: 6340 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
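The verify step traced here checks the pool two ways: the global HugePages_Total from /proc/meminfo must equal the pages the test requested plus any surplus and reserved pages, and each node's /sys/devices/system/node/node<N>/meminfo must report the pages assigned to that node (a single node on this VM). A hedged sketch of that accounting, reusing the get_meminfo_sketch above; verify_hugepages_sketch and its per-node expectation are illustrative simplifications, not the hugepages.sh source.

    verify_hugepages_sketch() {    # expected = the nr_hugepages the test asked for
        local expected=$1
        local total surp resv node
        total=$(get_meminfo_sketch HugePages_Total)
        surp=$(get_meminfo_sketch HugePages_Surp)
        resv=$(get_meminfo_sketch HugePages_Rsvd)
        # global accounting: requested pages plus surplus plus reserved must add up to the pool
        (( total == expected + surp + resv )) || return 1
        # per-node accounting: on a single-node machine every requested page lands on node0
        for node in /sys/devices/system/node/node[0-9]*; do
            node=${node##*node}
            echo "node$node=$(get_meminfo_sketch HugePages_Total "$node") expecting $expected"
        done
    }

Against the 512-page custom_alloc pool this would print node0=512 expecting 512, matching the trace above; the no_shrink_alloc test that starts here repeats the same style of check with 1024 pages requested on node 0.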
00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.673 
12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:01.673 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.674 
12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8126876 kB' 'MemAvailable: 9507224 kB' 'Buffers: 2436 kB' 'Cached: 1594712 kB' 'SwapCached: 0 kB' 'Active: 452420 kB' 'Inactive: 1265824 kB' 'Active(anon): 131560 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265824 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122712 kB' 'Mapped: 48628 kB' 'Shmem: 10464 kB' 'KReclaimable: 61204 kB' 'Slab: 136944 kB' 'SReclaimable: 61204 kB' 'SUnreclaim: 75740 kB' 'KernelStack: 6336 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.674 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.675 12:28:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.675 12:28:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.675 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8126876 kB' 'MemAvailable: 9507224 kB' 'Buffers: 2436 kB' 'Cached: 1594712 kB' 'SwapCached: 0 kB' 'Active: 452408 kB' 'Inactive: 1265824 kB' 'Active(anon): 131548 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265824 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122756 kB' 'Mapped: 48628 kB' 'Shmem: 10464 kB' 'KReclaimable: 61204 kB' 'Slab: 136944 kB' 'SReclaimable: 61204 kB' 'SUnreclaim: 75740 kB' 'KernelStack: 6320 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.676 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.677 12:28:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.677 12:28:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.677 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.678 12:28:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:01.678 nr_hugepages=1024 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:01.678 resv_hugepages=0 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:01.678 surplus_hugepages=0 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:01.678 anon_hugepages=0 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
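The loop traced above repeats the same pattern for every query: read the meminfo lines with IFS=': ', skip each key that is not the one requested, and echo its value (0 here for AnonHugePages, HugePages_Surp and HugePages_Rsvd). A minimal standalone sketch of that lookup follows; it is reconstructed from this trace, so the helper name and layout are assumptions and it is not the canonical setup/common.sh code (which additionally buffers the file via mapfile before scanning it).

#!/usr/bin/env bash
# Sketch of the meminfo lookup walked through in the trace above; reconstructed
# from the xtrace output, so get_meminfo_sketch and its structure are
# illustrative only, not the real setup/common.sh helper.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo var val _
    # Per-node queries read the per-node copy instead, when it exists.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    # Scan "Key: value [kB]" lines; every non-matching key hits "continue",
    # which is exactly the long run of [[ Key == ... ]] / continue pairs above.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < "$mem_f"
    echo 0
}

# The three lookups completed so far in this block all return 0:
anon=$(get_meminfo_sketch AnonHugePages)
surp=$(get_meminfo_sketch HugePages_Surp)
resv=$(get_meminfo_sketch HugePages_Rsvd)
echo "anon=$anon surp=$surp resv=$resv"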
00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8126876 kB' 'MemAvailable: 9507224 kB' 'Buffers: 2436 kB' 'Cached: 1594712 kB' 'SwapCached: 0 kB' 'Active: 452436 kB' 'Inactive: 1265824 kB' 'Active(anon): 131576 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265824 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122760 kB' 'Mapped: 48628 kB' 'Shmem: 10464 kB' 'KReclaimable: 61204 kB' 'Slab: 136944 kB' 'SReclaimable: 61204 kB' 'SUnreclaim: 75740 kB' 'KernelStack: 6352 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 352492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.678 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
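For context, the figures in the snapshot just printed are internally consistent: 1024 hugepages at 2048 kB each account for the 2097152 kB reported as Hugetlb, and with surplus, reserved and anon hugepages all at 0 the comparisons traced at hugepages.sh@107 and @109 reduce to a plain nr_hugepages check. A small hypothetical recalculation, with illustrative variable names and the values copied from the snapshot:

#!/usr/bin/env bash
# Hypothetical arithmetic over the snapshot above; names are illustrative.
hp_total=1024      # HugePages_Total
hp_size_kb=2048    # Hugepagesize
hugetlb_kb=2097152 # Hugetlb
nr_hugepages=1024 surp=0 resv=0

# The hugetlb pool pins HugePages_Total * Hugepagesize of memory.
(( hp_total * hp_size_kb == hugetlb_kb )) &&
    echo "Hugetlb = ${hugetlb_kb} kB (= ${hp_total} x ${hp_size_kb} kB)"

# With no surplus or reserved pages the allocation check is an exact match
# against the requested count, which is what no_shrink_alloc verifies.
(( hp_total == nr_hugepages + surp + resv )) &&
    echo "all ${nr_hugepages} requested hugepages are still allocated"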
00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.679 12:28:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.679 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.680 12:28:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8126876 kB' 'MemUsed: 4115100 kB' 'SwapCached: 0 kB' 'Active: 452648 kB' 'Inactive: 1265824 kB' 'Active(anon): 131788 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265824 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 
kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 1597148 kB' 'Mapped: 48628 kB' 'AnonPages: 122972 kB' 'Shmem: 10464 kB' 'KernelStack: 6336 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61204 kB' 'Slab: 136944 kB' 'SReclaimable: 61204 kB' 'SUnreclaim: 75740 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.680 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.681 12:28:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.681 
12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.681 12:28:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:01.681 node0=1024 expecting 1024 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:01.681 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:01.682 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:01.682 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:01.682 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:01.682 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.682 12:28:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:01.939 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:01.939 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:01.939 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:02.203 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:02.203 12:28:28 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:02.203 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:02.203 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:02.203 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:02.203 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:02.203 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:02.203 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:02.203 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:02.203 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:02.203 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:02.203 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:02.203 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.203 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.203 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.203 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.203 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.203 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.203 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.203 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.203 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8127020 kB' 'MemAvailable: 9507364 kB' 'Buffers: 2436 kB' 'Cached: 1594712 kB' 'SwapCached: 0 kB' 'Active: 448384 kB' 'Inactive: 1265824 kB' 'Active(anon): 127524 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265824 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 118460 kB' 'Mapped: 48016 kB' 'Shmem: 10464 kB' 'KReclaimable: 61200 kB' 'Slab: 136732 kB' 'SReclaimable: 61200 kB' 'SUnreclaim: 75532 kB' 'KernelStack: 6244 kB' 'PageTables: 3836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
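Just before the setup.sh rerun above, the hugepages.sh@27-@128 stretch walked each NUMA node, folded the per-node surplus (and reserved) counters into the expected count, and printed "node0=1024 expecting 1024". In outline, under the same assumptions as the earlier sketches (nodes_test mirrors the trace, but this is simplified bookkeeping, not SPDK's own code):

# Outline of the per-node walk traced at hugepages.sh@27-@128 above.
nodes_test=()
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    nodes_test[node]=1024                              # expected pages per node
done
for node in "${!nodes_test[@]}"; do
    surp=$(get_meminfo_sketch HugePages_Surp "$node")  # per-node counter, 0 above
    (( nodes_test[node] += surp ))                     # the real script also folds in resv
    echo "node$node=${nodes_test[node]} expecting 1024"   # cf. "node0=1024 expecting 1024"
done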
00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 12:28:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
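The hugepages.sh@96 test a few entries back, [[ always [madvise] never != *\[\n\e\v\e\r\]* ]], is inspecting the transparent-hugepage policy string: anonymous huge pages are only counted when THP is not set to [never]. Spelled out as a standalone check, again reusing the illustrative helper (the sysfs path is the standard THP control file):

# Sketch of the gate traced at hugepages.sh@96-@97: only count transparent
# (anon) huge pages when THP is not globally disabled.
thp_state=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp_state != *"[never]"* ]]; then
    anon=$(get_meminfo_sketch AnonHugePages)    # 0 kB in the trace above
else
    anon=0
fi
echo "anon_hugepages=$anon"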
00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.204 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
[[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8127020 kB' 'MemAvailable: 9507364 kB' 'Buffers: 2436 kB' 'Cached: 1594712 kB' 'SwapCached: 0 kB' 'Active: 447768 kB' 'Inactive: 1265824 kB' 'Active(anon): 126908 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265824 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 117796 kB' 'Mapped: 48016 kB' 'Shmem: 10464 kB' 'KReclaimable: 61200 kB' 'Slab: 136732 kB' 'SReclaimable: 61200 kB' 'SUnreclaim: 75532 kB' 'KernelStack: 6196 kB' 'PageTables: 3672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 
12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.206 12:28:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.206 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8127020 kB' 'MemAvailable: 9507364 kB' 'Buffers: 2436 kB' 'Cached: 1594712 kB' 'SwapCached: 0 kB' 'Active: 447768 kB' 'Inactive: 1265824 kB' 'Active(anon): 126908 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265824 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 117796 kB' 'Mapped: 48016 kB' 'Shmem: 10464 kB' 'KReclaimable: 61200 kB' 'Slab: 136732 kB' 'SReclaimable: 61200 kB' 'SUnreclaim: 75532 kB' 'KernelStack: 6264 kB' 'PageTables: 3672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.207 12:28:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.207 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.208 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:02.209 nr_hugepages=1024 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:02.209 resv_hugepages=0 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:02.209 surplus_hugepages=0 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:02.209 anon_hugepages=0 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- 
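(At this point the trace has resolved anon=0, surp=0 and resv=0, echoed nr_hugepages=1024, and run the consistency checks (( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages )) before fetching HugePages_Total. A self-contained sketch of that kind of accounting check against the live /proc/meminfo; the field names match the dump above, while the awk extraction and variable names are illustrative assumptions rather than the script's own code:)
# Sketch only: re-derives the hugepage accounting check on the current host.
expected=1024                                        # target pool size seen in the trace
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
(( expected == total + surp + resv )) \
    && echo "hugepage pool matches the requested size" \
    || echo "unexpected surplus/reserved hugepages"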
setup/common.sh@28 -- # mapfile -t mem 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8127020 kB' 'MemAvailable: 9507364 kB' 'Buffers: 2436 kB' 'Cached: 1594712 kB' 'SwapCached: 0 kB' 'Active: 447512 kB' 'Inactive: 1265824 kB' 'Active(anon): 126652 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265824 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 117852 kB' 'Mapped: 47888 kB' 'Shmem: 10464 kB' 'KReclaimable: 61200 kB' 'Slab: 136732 kB' 'SReclaimable: 61200 kB' 'SUnreclaim: 75532 kB' 'KernelStack: 6240 kB' 'PageTables: 3740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.209 12:28:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.209 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.210 12:28:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.210 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 8127020 kB' 'MemUsed: 4114956 kB' 'SwapCached: 0 kB' 'Active: 
447480 kB' 'Inactive: 1265824 kB' 'Active(anon): 126620 kB' 'Inactive(anon): 0 kB' 'Active(file): 320860 kB' 'Inactive(file): 1265824 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'FilePages: 1597148 kB' 'Mapped: 47888 kB' 'AnonPages: 117820 kB' 'Shmem: 10464 kB' 'KernelStack: 6224 kB' 'PageTables: 3688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61200 kB' 'Slab: 136732 kB' 'SReclaimable: 61200 kB' 'SUnreclaim: 75532 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.211 
12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.211 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.212 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.212 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.212 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.212 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.212 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.212 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.212 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.212 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.212 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.212 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.212 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.212 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.212 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.212 12:28:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.212 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.212 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.212 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.212 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.212 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.212 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.212 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.212 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.212 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.212 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.212 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.212 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.212 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.212 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.212 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.212 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.212 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.212 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.212 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:02.212 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:02.212 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:02.212 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:02.212 node0=1024 expecting 1024 00:04:02.212 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:02.212 12:28:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:02.212 00:04:02.212 real 0m1.028s 00:04:02.212 user 0m0.523s 00:04:02.212 sys 0m0.541s 00:04:02.212 12:28:28 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:02.212 12:28:28 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:02.212 ************************************ 00:04:02.212 END TEST no_shrink_alloc 00:04:02.212 ************************************ 00:04:02.212 12:28:28 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:02.212 12:28:28 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:02.212 12:28:28 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:02.212 12:28:28 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:02.212 
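
The long run of key checks above is setup/common.sh's get_meminfo helper scanning a meminfo file one field at a time: every key that is not the one requested hits continue, and the first match echoes its value and returns. no_shrink_alloc calls it twice here, once against /proc/meminfo for HugePages_Total (1024) and once against node0's own meminfo for HugePages_Surp (0), before asserting (( 1024 == nr_hugepages + surp + resv )) and printing 'node0=1024 expecting 1024'. A minimal sketch of that lookup, reconstructed from the xtrace rather than copied from the SPDK source (the while-read form stands in for the script's mapfile-based loop):

    get_meminfo() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        # Per-node queries read the node's own meminfo and drop the "Node N" column prefix.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every field that is not the requested key
            echo "$val"
            return 0
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }
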
12:28:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:02.212 12:28:28 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:02.212 12:28:28 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:02.212 12:28:28 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:02.212 12:28:28 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:02.212 12:28:28 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:02.212 00:04:02.212 real 0m4.497s 00:04:02.212 user 0m2.159s 00:04:02.212 sys 0m2.426s 00:04:02.212 12:28:28 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:02.212 ************************************ 00:04:02.212 12:28:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:02.212 END TEST hugepages 00:04:02.212 ************************************ 00:04:02.212 12:28:28 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:02.212 12:28:28 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:02.212 12:28:28 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:02.212 12:28:28 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:02.212 12:28:28 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:02.212 ************************************ 00:04:02.212 START TEST driver 00:04:02.212 ************************************ 00:04:02.212 12:28:28 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:02.470 * Looking for test storage... 00:04:02.470 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:02.470 12:28:28 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:02.470 12:28:28 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:02.470 12:28:28 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:03.078 12:28:28 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:03.078 12:28:28 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:03.078 12:28:28 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.078 12:28:28 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:03.078 ************************************ 00:04:03.078 START TEST guess_driver 00:04:03.078 ************************************ 00:04:03.078 12:28:28 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:03.078 12:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:03.078 12:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:03.078 12:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:03.078 12:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:03.078 12:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:03.078 12:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:03.078 12:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:03.078 12:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 
00:04:03.078 12:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:03.078 12:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:03.078 12:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:04:03.078 12:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:04:03.078 12:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:03.078 12:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:03.078 12:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:03.078 12:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:03.078 12:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:03.078 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:03.078 12:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:03.078 12:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:03.078 12:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:03.078 Looking for driver=uio_pci_generic 00:04:03.078 12:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:03.078 12:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.078 12:28:28 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:03.078 12:28:28 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.078 12:28:28 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:03.653 12:28:29 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:03.653 12:28:29 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:04:03.653 12:28:29 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.653 12:28:29 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.653 12:28:29 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:03.653 12:28:29 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.653 12:28:29 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.653 12:28:29 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:03.653 12:28:29 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.911 12:28:29 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:03.911 12:28:29 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:03.911 12:28:29 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:03.911 12:28:29 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:04.477 00:04:04.477 real 0m1.416s 00:04:04.477 user 0m0.540s 00:04:04.477 sys 0m0.843s 00:04:04.477 12:28:30 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:04:04.477 12:28:30 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:04.477 ************************************ 00:04:04.477 END TEST guess_driver 00:04:04.477 ************************************ 00:04:04.477 12:28:30 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:04.477 00:04:04.477 real 0m2.100s 00:04:04.477 user 0m0.738s 00:04:04.477 sys 0m1.375s 00:04:04.477 12:28:30 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:04.477 12:28:30 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:04.477 ************************************ 00:04:04.477 END TEST driver 00:04:04.477 ************************************ 00:04:04.477 12:28:30 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:04.477 12:28:30 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:04.477 12:28:30 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:04.477 12:28:30 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.477 12:28:30 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:04.477 ************************************ 00:04:04.477 START TEST devices 00:04:04.477 ************************************ 00:04:04.477 12:28:30 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:04.477 * Looking for test storage... 00:04:04.477 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:04.477 12:28:30 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:04.477 12:28:30 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:04.477 12:28:30 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:04.477 12:28:30 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:05.414 12:28:31 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:05.414 12:28:31 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:05.414 12:28:31 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:05.414 12:28:31 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:05.414 12:28:31 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:05.414 12:28:31 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:05.414 12:28:31 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:05.414 12:28:31 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:05.414 12:28:31 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:05.414 12:28:31 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:05.414 12:28:31 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:04:05.414 12:28:31 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:04:05.414 12:28:31 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:05.414 12:28:31 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:05.414 12:28:31 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:05.414 12:28:31 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 
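
guess_driver above lands on uio_pci_generic because this VM exposes no IOMMU groups: the vfio branch counts /sys/kernel/iommu_groups/* and gives up when the count is zero and unsafe no-IOMMU mode is not enabled ([[ '' == Y ]] fails), after which the fallback branch accepts uio_pci_generic once modprobe --show-depends resolves to loadable .ko files. A rough sketch of that decision, reconstructed from the trace and not the verbatim driver.sh (the vfio-pci name in the first branch is an assumption; the trace only shows the failing checks):

    pick_driver() {
        shopt -s nullglob
        local groups=(/sys/kernel/iommu_groups/*)
        local unsafe=""
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        # Prefer vfio only with working IOMMU groups or explicit unsafe no-IOMMU mode.
        if (( ${#groups[@]} > 0 )) || [[ $unsafe == Y ]]; then
            echo vfio-pci
        # Otherwise fall back to uio_pci_generic if modprobe can resolve its modules.
        elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            echo uio_pci_generic
        else
            echo 'No valid driver found'
        fi
    }
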
00:04:05.414 12:28:31 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:04:05.414 12:28:31 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:05.414 12:28:31 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:05.414 12:28:31 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:05.414 12:28:31 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:05.414 12:28:31 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:05.414 12:28:31 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:05.414 12:28:31 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:05.414 12:28:31 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:05.414 12:28:31 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:05.414 12:28:31 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:05.414 12:28:31 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:05.414 12:28:31 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:05.414 12:28:31 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:05.414 12:28:31 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:05.414 12:28:31 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:05.414 12:28:31 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:05.414 12:28:31 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:05.414 12:28:31 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:05.414 12:28:31 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:05.414 12:28:31 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:05.414 No valid GPT data, bailing 00:04:05.414 12:28:31 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:05.414 12:28:31 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:05.414 12:28:31 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:05.414 12:28:31 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:05.414 12:28:31 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:05.414 12:28:31 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:05.414 12:28:31 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:05.414 12:28:31 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:05.414 12:28:31 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:05.414 12:28:31 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:05.414 12:28:31 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:05.414 12:28:31 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:04:05.414 12:28:31 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:05.414 12:28:31 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:05.414 12:28:31 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:05.414 12:28:31 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:04:05.414 
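
The get_zoned_devs pass above walks every /sys/block/nvme* entry and keeps only devices whose queue/zoned attribute reports something other than "none"; all four namespaces on this VM report "none", so nothing is excluded from the later size checks. A small sketch of that filter, assuming the same sysfs layout (a reconstruction, not the SPDK helper verbatim):

    get_zoned_devs() {
        local -gA zoned_devs
        zoned_devs=()
        local nvme
        for nvme in /sys/block/nvme*; do
            # Zoned block devices report host-managed/host-aware here instead of "none".
            if [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]]; then
                zoned_devs[${nvme##*/}]=1   # marker value only in this sketch
            fi
        done
    }
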
12:28:31 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:04:05.414 12:28:31 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:04:05.414 No valid GPT data, bailing 00:04:05.414 12:28:31 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:04:05.414 12:28:31 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:05.414 12:28:31 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:05.414 12:28:31 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:04:05.414 12:28:31 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:04:05.414 12:28:31 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:04:05.414 12:28:31 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:05.414 12:28:31 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:05.414 12:28:31 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:05.414 12:28:31 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:05.414 12:28:31 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:05.414 12:28:31 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:04:05.414 12:28:31 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:05.414 12:28:31 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:05.414 12:28:31 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:05.414 12:28:31 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:04:05.414 12:28:31 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:04:05.414 12:28:31 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:04:05.414 No valid GPT data, bailing 00:04:05.414 12:28:31 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:04:05.414 12:28:31 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:05.414 12:28:31 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:05.414 12:28:31 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:04:05.414 12:28:31 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:04:05.414 12:28:31 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:04:05.414 12:28:31 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:05.414 12:28:31 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:05.414 12:28:31 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:05.414 12:28:31 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:05.414 12:28:31 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:05.414 12:28:31 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:05.414 12:28:31 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:05.414 12:28:31 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:04:05.414 12:28:31 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:05.414 12:28:31 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:05.414 12:28:31 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:04:05.414 12:28:31 
setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:05.414 No valid GPT data, bailing 00:04:05.673 12:28:31 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:05.673 12:28:31 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:05.673 12:28:31 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:05.673 12:28:31 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:05.673 12:28:31 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:05.673 12:28:31 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:05.673 12:28:31 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:04:05.673 12:28:31 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:05.673 12:28:31 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:05.673 12:28:31 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:04:05.673 12:28:31 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:04:05.673 12:28:31 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:05.673 12:28:31 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:05.673 12:28:31 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.673 12:28:31 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.673 12:28:31 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:05.673 ************************************ 00:04:05.673 START TEST nvme_mount 00:04:05.673 ************************************ 00:04:05.673 12:28:31 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:05.673 12:28:31 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:05.673 12:28:31 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:05.673 12:28:31 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:05.673 12:28:31 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:05.673 12:28:31 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:05.673 12:28:31 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:05.673 12:28:31 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:05.673 12:28:31 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:05.674 12:28:31 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:05.674 12:28:31 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:05.674 12:28:31 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:05.674 12:28:31 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:05.674 12:28:31 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:05.674 12:28:31 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:05.674 12:28:31 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:05.674 12:28:31 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:05.674 12:28:31 setup.sh.devices.nvme_mount -- 
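
The enumeration above treats each nvme namespace as a candidate test disk: scripts/spdk-gpt.py reports 'No valid GPT data, bailing' and blkid finds no PTTYPE, so block_in_use returns 1 (not in use), and any disk of at least min_disk_size (3221225472 bytes, 3 GiB) is recorded together with its PCI address; nvme0n1 then becomes the read-only test_disk. A condensed sketch of that selection loop, with the threshold taken from the trace and the sysfs PCI hop an assumption (the blkid probe stands in for the spdk-gpt.py check):

    shopt -s extglob
    min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472 bytes, as in the trace
    blocks=()
    declare -A blocks_to_pci=()
    for block in /sys/block/nvme!(*c*); do      # skip hidden nvmeXcYnZ controller nodes
        dev=${block##*/}
        # Assumed sysfs hop: namespace -> controller -> PCI function address.
        pci=$(basename "$(readlink -f "$block/device/device")")
        # A disk that already carries a partition table is treated as in use and skipped.
        blkid -s PTTYPE -o value "/dev/$dev" >/dev/null 2>&1 && continue
        size=$(( $(<"$block/size") * 512 ))     # the sysfs size file counts 512-byte sectors
        (( size >= min_disk_size )) || continue
        blocks+=("$dev")
        blocks_to_pci[$dev]=$pci
    done
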
setup/common.sh@51 -- # (( size /= 4096 )) 00:04:05.674 12:28:31 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:05.674 12:28:31 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:06.612 Creating new GPT entries in memory. 00:04:06.612 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:06.612 other utilities. 00:04:06.612 12:28:32 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:06.612 12:28:32 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:06.612 12:28:32 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:06.612 12:28:32 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:06.612 12:28:32 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:07.579 Creating new GPT entries in memory. 00:04:07.579 The operation has completed successfully. 00:04:07.579 12:28:33 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:07.579 12:28:33 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:07.579 12:28:33 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 57125 00:04:07.579 12:28:33 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:07.579 12:28:33 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:07.579 12:28:33 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:07.579 12:28:33 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:07.579 12:28:33 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:07.579 12:28:33 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:07.579 12:28:33 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:07.579 12:28:33 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:07.579 12:28:33 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:07.579 12:28:33 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:07.579 12:28:33 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:07.579 12:28:33 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:07.579 12:28:33 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:07.579 12:28:33 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:07.579 12:28:33 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:07.579 12:28:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.579 12:28:33 setup.sh.devices.nvme_mount -- 
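
partition_drive and the mkfs helper traced above turn nvme0n1 into a single test partition and mount it: size starts at 1073741824, is divided by 4096 to 262144, and the first partition therefore spans sectors 2048 through 2048 + 262144 - 1 = 264191, exactly the --new=1:2048:264191 call in the trace, after which the partition is formatted with mkfs.ext4 -qF and mounted under the test directory. A compressed sketch of those two helpers, reconstructed from the xtrace (the real flow also waits for the partition uevent via sync_dev_uevents.sh, which is omitted here):

    partition_drive() {
        local disk=$1 part_no=${2:-1} size=${3:-1073741824}
        local part part_start=0 part_end=0
        (( size /= 4096 ))                       # 1073741824 -> 262144, as in the trace
        sgdisk "/dev/$disk" --zap-all            # wipe any existing GPT/MBR structures
        for (( part = 1; part <= part_no; part++ )); do
            (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
            (( part_end = part_start + size - 1 ))   # 2048 + 262144 - 1 = 264191 for partition 1
            flock "/dev/$disk" sgdisk "/dev/$disk" --new="$part:$part_start:$part_end"
        done
    }

    # Mirrors the mkfs helper named in the trace; simplified, no size argument handling.
    mkfs() {
        local dev=$1 mount_point=$2
        mkdir -p "$mount_point"
        mkfs.ext4 -qF "$dev"
        mount "$dev" "$mount_point"
    }
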
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:07.579 12:28:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:07.579 12:28:33 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.579 12:28:33 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:07.837 12:28:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:07.837 12:28:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:07.837 12:28:33 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:07.837 12:28:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.837 12:28:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:07.837 12:28:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.096 12:28:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:08.096 12:28:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.096 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:08.096 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.096 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:08.096 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:08.096 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:08.096 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:08.096 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:08.096 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:08.096 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:08.096 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:08.096 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:08.096 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:08.096 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:08.096 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:08.096 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:08.354 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:08.354 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:08.354 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:08.354 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:08.354 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- 
# mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:08.354 12:28:34 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:08.354 12:28:34 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:08.354 12:28:34 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:08.354 12:28:34 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:08.612 12:28:34 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:08.612 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:08.612 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:08.612 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:08.612 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:08.612 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:08.612 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:08.612 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:08.612 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:08.612 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:08.612 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.612 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:08.612 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:08.612 12:28:34 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:08.612 12:28:34 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:08.612 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:08.612 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:08.612 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:08.612 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.612 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:08.612 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.870 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:08.870 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.870 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:08.870 12:28:34 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.870 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:08.870 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:08.870 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:08.870 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:08.870 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:08.871 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:08.871 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:04:08.871 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:08.871 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:08.871 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:08.871 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:08.871 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:08.871 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:08.871 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:08.871 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.871 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:08.871 12:28:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:08.871 12:28:34 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:08.871 12:28:34 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:09.438 12:28:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:09.438 12:28:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:09.438 12:28:35 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:09.438 12:28:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.438 12:28:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:09.438 12:28:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.438 12:28:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:09.438 12:28:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.438 12:28:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:09.438 12:28:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.438 12:28:35 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:09.438 12:28:35 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n '' ]] 00:04:09.438 12:28:35 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:09.438 12:28:35 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:09.438 12:28:35 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:09.696 12:28:35 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:09.696 12:28:35 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:09.696 12:28:35 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:09.696 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:09.696 00:04:09.696 real 0m4.014s 00:04:09.696 user 0m0.715s 00:04:09.696 sys 0m1.034s 00:04:09.696 12:28:35 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:09.696 12:28:35 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:09.696 ************************************ 00:04:09.696 END TEST nvme_mount 00:04:09.696 ************************************ 00:04:09.696 12:28:35 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:09.696 12:28:35 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:09.696 12:28:35 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:09.696 12:28:35 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.696 12:28:35 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:09.696 ************************************ 00:04:09.696 START TEST dm_mount 00:04:09.696 ************************************ 00:04:09.696 12:28:35 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:09.697 12:28:35 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:09.697 12:28:35 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:09.697 12:28:35 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:09.697 12:28:35 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:09.697 12:28:35 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:09.697 12:28:35 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:09.697 12:28:35 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:09.697 12:28:35 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:09.697 12:28:35 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:09.697 12:28:35 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:09.697 12:28:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:09.697 12:28:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:09.697 12:28:35 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:09.697 12:28:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:09.697 12:28:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:09.697 12:28:35 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:09.697 12:28:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:09.697 12:28:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 
00:04:09.697 12:28:35 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:09.697 12:28:35 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:09.697 12:28:35 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:10.632 Creating new GPT entries in memory. 00:04:10.632 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:10.632 other utilities. 00:04:10.632 12:28:36 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:10.632 12:28:36 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:10.632 12:28:36 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:10.632 12:28:36 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:10.632 12:28:36 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:11.564 Creating new GPT entries in memory. 00:04:11.564 The operation has completed successfully. 00:04:11.564 12:28:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:11.564 12:28:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:11.564 12:28:37 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:11.564 12:28:37 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:11.564 12:28:37 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:12.936 The operation has completed successfully. 
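The sgdisk calls traced above are the partitioning step of the dm_mount test: wipe the label, then carve two partitions (262144 sectors, roughly 128 MiB each at 512-byte sectors) that device-mapper joins later. A minimal stand-alone sketch of that step, assuming /dev/nvme0n1 is a disposable test disk and substituting udevadm settle for the repo's sync_dev_uevents.sh helper:

  disk=/dev/nvme0n1
  sgdisk "$disk" --zap-all                            # drop any existing GPT/MBR signatures
  flock "$disk" sgdisk "$disk" --new=1:2048:264191    # partition 1, sector range copied from the trace
  flock "$disk" sgdisk "$disk" --new=2:264192:526335  # partition 2, directly after partition 1
  udevadm settle                                      # wait for /dev/nvme0n1p1 and /dev/nvme0n1p2 to appear
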
00:04:12.936 12:28:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:12.936 12:28:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:12.936 12:28:38 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 57558 00:04:12.936 12:28:38 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:12.936 12:28:38 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:12.936 12:28:38 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:12.936 12:28:38 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:12.936 12:28:38 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:12.936 12:28:38 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:12.936 12:28:38 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:12.936 12:28:38 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:12.936 12:28:38 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:12.936 12:28:38 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:12.936 12:28:38 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:12.936 12:28:38 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:12.936 12:28:38 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:12.937 12:28:38 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:12.937 12:28:38 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:12.937 12:28:38 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:12.937 12:28:38 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:12.937 12:28:38 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:12.937 12:28:38 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:12.937 12:28:38 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:12.937 12:28:38 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:12.937 12:28:38 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:12.937 12:28:38 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:12.937 12:28:38 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:12.937 12:28:38 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:12.937 12:28:38 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:12.937 12:28:38 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:04:12.937 12:28:38 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:12.937 12:28:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.937 12:28:38 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:12.937 12:28:38 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:12.937 12:28:38 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.937 12:28:38 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:12.937 12:28:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:12.937 12:28:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:12.937 12:28:38 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:12.937 12:28:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.937 12:28:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:12.937 12:28:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.195 12:28:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:13.195 12:28:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.195 12:28:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:13.195 12:28:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.195 12:28:39 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:13.195 12:28:39 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:13.195 12:28:39 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:13.195 12:28:39 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:13.195 12:28:39 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:13.195 12:28:39 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:13.195 12:28:39 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:13.195 12:28:39 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:13.195 12:28:39 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:13.195 12:28:39 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:13.195 12:28:39 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:13.195 12:28:39 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:13.195 12:28:39 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:13.195 12:28:39 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:13.195 12:28:39 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.195 12:28:39 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:13.195 12:28:39 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:13.195 12:28:39 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.195 12:28:39 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:13.452 12:28:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:13.452 12:28:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:13.452 12:28:39 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:13.452 12:28:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.452 12:28:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:13.453 12:28:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.710 12:28:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:13.710 12:28:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.710 12:28:39 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:13.710 12:28:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.710 12:28:39 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:13.710 12:28:39 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:13.710 12:28:39 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:13.710 12:28:39 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:13.710 12:28:39 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:13.710 12:28:39 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:13.710 12:28:39 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:13.710 12:28:39 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:13.710 12:28:39 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:13.710 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:13.710 12:28:39 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:13.710 12:28:39 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:13.967 00:04:13.967 real 0m4.208s 00:04:13.967 user 0m0.443s 00:04:13.967 sys 0m0.735s 00:04:13.967 12:28:39 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.967 ************************************ 00:04:13.967 END TEST dm_mount 00:04:13.967 ************************************ 00:04:13.967 12:28:39 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:13.967 12:28:39 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:13.967 12:28:39 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:13.967 12:28:39 
setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:13.967 12:28:39 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:13.967 12:28:39 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:13.967 12:28:39 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:13.967 12:28:39 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:13.967 12:28:39 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:14.224 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:14.224 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:14.224 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:14.224 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:14.224 12:28:40 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:14.224 12:28:40 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:14.224 12:28:40 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:14.224 12:28:40 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:14.224 12:28:40 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:14.224 12:28:40 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:14.224 12:28:40 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:14.224 ************************************ 00:04:14.224 END TEST devices 00:04:14.224 ************************************ 00:04:14.224 00:04:14.224 real 0m9.730s 00:04:14.224 user 0m1.823s 00:04:14.224 sys 0m2.324s 00:04:14.224 12:28:40 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.224 12:28:40 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:14.224 12:28:40 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:14.224 00:04:14.224 real 0m21.459s 00:04:14.224 user 0m6.914s 00:04:14.224 sys 0m8.975s 00:04:14.224 12:28:40 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.224 12:28:40 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:14.224 ************************************ 00:04:14.224 END TEST setup.sh 00:04:14.224 ************************************ 00:04:14.224 12:28:40 -- common/autotest_common.sh@1142 -- # return 0 00:04:14.224 12:28:40 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:14.787 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:14.787 Hugepages 00:04:14.787 node hugesize free / total 00:04:14.787 node0 1048576kB 0 / 0 00:04:14.787 node0 2048kB 2048 / 2048 00:04:14.787 00:04:14.787 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:15.045 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:15.045 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:15.045 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:04:15.045 12:28:41 -- spdk/autotest.sh@130 -- # uname -s 00:04:15.045 12:28:41 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:15.045 12:28:41 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:15.045 12:28:41 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:15.992 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:15.992 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:15.992 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:15.992 12:28:41 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:16.923 12:28:42 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:16.923 12:28:42 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:16.923 12:28:42 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:16.923 12:28:42 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:16.923 12:28:42 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:16.923 12:28:42 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:16.923 12:28:42 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:16.923 12:28:42 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:16.923 12:28:42 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:17.180 12:28:43 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:17.180 12:28:43 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:17.180 12:28:43 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:17.443 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:17.443 Waiting for block devices as requested 00:04:17.443 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:17.702 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:17.702 12:28:43 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:17.702 12:28:43 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:17.702 12:28:43 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:17.702 12:28:43 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:04:17.702 12:28:43 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:17.702 12:28:43 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:17.702 12:28:43 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:17.702 12:28:43 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:04:17.702 12:28:43 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:04:17.702 12:28:43 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:04:17.702 12:28:43 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:04:17.702 12:28:43 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:17.702 12:28:43 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:17.702 12:28:43 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:17.702 12:28:43 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:17.702 12:28:43 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:17.702 12:28:43 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:04:17.702 12:28:43 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:17.702 12:28:43 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:17.702 12:28:43 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:17.702 12:28:43 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:17.702 12:28:43 -- common/autotest_common.sh@1557 -- # continue 00:04:17.702 
12:28:43 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:17.702 12:28:43 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:17.702 12:28:43 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:17.702 12:28:43 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:04:17.702 12:28:43 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:17.702 12:28:43 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:17.702 12:28:43 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:17.702 12:28:43 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:17.702 12:28:43 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:17.702 12:28:43 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:17.702 12:28:43 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:17.702 12:28:43 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:17.702 12:28:43 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:17.702 12:28:43 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:17.702 12:28:43 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:17.702 12:28:43 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:17.702 12:28:43 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:17.702 12:28:43 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:17.702 12:28:43 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:17.702 12:28:43 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:17.702 12:28:43 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:17.702 12:28:43 -- common/autotest_common.sh@1557 -- # continue 00:04:17.702 12:28:43 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:17.702 12:28:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:17.702 12:28:43 -- common/autotest_common.sh@10 -- # set +x 00:04:17.702 12:28:43 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:17.702 12:28:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:17.702 12:28:43 -- common/autotest_common.sh@10 -- # set +x 00:04:17.702 12:28:43 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:18.283 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:18.541 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:18.541 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:18.541 12:28:44 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:18.541 12:28:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:18.541 12:28:44 -- common/autotest_common.sh@10 -- # set +x 00:04:18.541 12:28:44 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:18.541 12:28:44 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:18.541 12:28:44 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:18.541 12:28:44 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:18.541 12:28:44 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:18.541 12:28:44 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:18.541 12:28:44 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:18.541 12:28:44 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:18.541 12:28:44 -- common/autotest_common.sh@1514 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:18.798 12:28:44 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:18.798 12:28:44 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:18.798 12:28:44 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:04:18.798 12:28:44 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:18.799 12:28:44 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:18.799 12:28:44 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:18.799 12:28:44 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:18.799 12:28:44 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:18.799 12:28:44 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:18.799 12:28:44 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:18.799 12:28:44 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:18.799 12:28:44 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:18.799 12:28:44 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:18.799 12:28:44 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:18.799 12:28:44 -- common/autotest_common.sh@1593 -- # return 0 00:04:18.799 12:28:44 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:18.799 12:28:44 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:18.799 12:28:44 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:18.799 12:28:44 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:18.799 12:28:44 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:18.799 12:28:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:18.799 12:28:44 -- common/autotest_common.sh@10 -- # set +x 00:04:18.799 12:28:44 -- spdk/autotest.sh@164 -- # [[ 1 -eq 1 ]] 00:04:18.799 12:28:44 -- spdk/autotest.sh@165 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:18.799 12:28:44 -- spdk/autotest.sh@165 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:18.799 12:28:44 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:18.799 12:28:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:18.799 12:28:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.799 12:28:44 -- common/autotest_common.sh@10 -- # set +x 00:04:18.799 ************************************ 00:04:18.799 START TEST env 00:04:18.799 ************************************ 00:04:18.799 12:28:44 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:18.799 * Looking for test storage... 
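The pre-cleanup block above builds its controller list from scripts/gen_nvme.sh and then inspects each controller with nvme-cli (oacs bit 3 = namespace management support, unvmcap = unallocated capacity). A rough hand-run equivalent of those helper calls, assuming root, nvme-cli, jq, and the repo path seen in the trace:

  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))  # 0000:00:10.0 0000:00:11.0 here
  for bdf in "${bdfs[@]}"; do
      # map the PCI address to its /dev/nvmeX node, roughly what get_nvme_ctrlr_from_bdf does
      sysfs_path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
      ctrlr=/dev/$(basename "$sysfs_path")
      nvme id-ctrl "$ctrlr" | grep oacs     # 0x12a above: bit 3 set, so namespace management is supported
      nvme id-ctrl "$ctrlr" | grep unvmcap  # 0 above: no unallocated capacity, nothing to revert
  done
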
00:04:18.799 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:18.799 12:28:44 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:18.799 12:28:44 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:18.799 12:28:44 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.799 12:28:44 env -- common/autotest_common.sh@10 -- # set +x 00:04:18.799 ************************************ 00:04:18.799 START TEST env_memory 00:04:18.799 ************************************ 00:04:18.799 12:28:44 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:18.799 00:04:18.799 00:04:18.799 CUnit - A unit testing framework for C - Version 2.1-3 00:04:18.799 http://cunit.sourceforge.net/ 00:04:18.799 00:04:18.799 00:04:18.799 Suite: memory 00:04:18.799 Test: alloc and free memory map ...[2024-07-12 12:28:44.846305] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:18.799 passed 00:04:19.063 Test: mem map translation ...[2024-07-12 12:28:44.877718] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:19.063 [2024-07-12 12:28:44.877916] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:19.063 [2024-07-12 12:28:44.878102] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:19.063 [2024-07-12 12:28:44.878306] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:19.063 passed 00:04:19.063 Test: mem map registration ...[2024-07-12 12:28:44.942034] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:19.063 [2024-07-12 12:28:44.942070] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:19.063 passed 00:04:19.063 Test: mem map adjacent registrations ...passed 00:04:19.063 00:04:19.063 Run Summary: Type Total Ran Passed Failed Inactive 00:04:19.063 suites 1 1 n/a 0 0 00:04:19.063 tests 4 4 4 0 0 00:04:19.063 asserts 152 152 152 0 n/a 00:04:19.063 00:04:19.064 Elapsed time = 0.213 seconds 00:04:19.064 00:04:19.064 real 0m0.230s 00:04:19.064 user 0m0.217s 00:04:19.064 sys 0m0.009s 00:04:19.064 ************************************ 00:04:19.064 END TEST env_memory 00:04:19.064 ************************************ 00:04:19.064 12:28:45 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:19.064 12:28:45 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:19.064 12:28:45 env -- common/autotest_common.sh@1142 -- # return 0 00:04:19.064 12:28:45 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:19.064 12:28:45 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:19.064 12:28:45 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.064 12:28:45 env -- common/autotest_common.sh@10 -- # set +x 00:04:19.064 ************************************ 00:04:19.064 START TEST env_vtophys 
00:04:19.064 ************************************ 00:04:19.064 12:28:45 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:19.064 EAL: lib.eal log level changed from notice to debug 00:04:19.064 EAL: Detected lcore 0 as core 0 on socket 0 00:04:19.064 EAL: Detected lcore 1 as core 0 on socket 0 00:04:19.064 EAL: Detected lcore 2 as core 0 on socket 0 00:04:19.064 EAL: Detected lcore 3 as core 0 on socket 0 00:04:19.064 EAL: Detected lcore 4 as core 0 on socket 0 00:04:19.064 EAL: Detected lcore 5 as core 0 on socket 0 00:04:19.064 EAL: Detected lcore 6 as core 0 on socket 0 00:04:19.064 EAL: Detected lcore 7 as core 0 on socket 0 00:04:19.064 EAL: Detected lcore 8 as core 0 on socket 0 00:04:19.064 EAL: Detected lcore 9 as core 0 on socket 0 00:04:19.064 EAL: Maximum logical cores by configuration: 128 00:04:19.064 EAL: Detected CPU lcores: 10 00:04:19.064 EAL: Detected NUMA nodes: 1 00:04:19.064 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:19.064 EAL: Detected shared linkage of DPDK 00:04:19.064 EAL: No shared files mode enabled, IPC will be disabled 00:04:19.064 EAL: Selected IOVA mode 'PA' 00:04:19.064 EAL: Probing VFIO support... 00:04:19.064 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:19.064 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:19.064 EAL: Ask a virtual area of 0x2e000 bytes 00:04:19.064 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:19.064 EAL: Setting up physically contiguous memory... 00:04:19.064 EAL: Setting maximum number of open files to 524288 00:04:19.064 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:19.064 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:19.064 EAL: Ask a virtual area of 0x61000 bytes 00:04:19.064 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:19.064 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:19.064 EAL: Ask a virtual area of 0x400000000 bytes 00:04:19.064 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:19.064 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:19.064 EAL: Ask a virtual area of 0x61000 bytes 00:04:19.064 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:19.064 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:19.064 EAL: Ask a virtual area of 0x400000000 bytes 00:04:19.064 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:19.064 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:19.064 EAL: Ask a virtual area of 0x61000 bytes 00:04:19.064 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:19.064 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:19.064 EAL: Ask a virtual area of 0x400000000 bytes 00:04:19.064 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:19.064 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:19.064 EAL: Ask a virtual area of 0x61000 bytes 00:04:19.064 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:19.064 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:19.064 EAL: Ask a virtual area of 0x400000000 bytes 00:04:19.064 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:19.064 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:19.064 EAL: Hugepages will be freed exactly as allocated. 
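The four 0x400000000-byte (16 GiB) virtual areas requested above are address-space reservations for the 2 MiB-page memseg lists (8192 segments each); no hugepages are consumed until the malloc tests below expand the heap. The pool those expansions draw from can be checked with the standard sysfs/procfs views (setup.sh status earlier in this log reported node0 2048kB 2048 / 2048):

  cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages    # total 2 MiB pages configured
  cat /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages  # pages still available to EAL
  grep -i huge /proc/meminfo
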
00:04:19.064 EAL: No shared files mode enabled, IPC is disabled 00:04:19.064 EAL: No shared files mode enabled, IPC is disabled 00:04:19.346 EAL: TSC frequency is ~2200000 KHz 00:04:19.346 EAL: Main lcore 0 is ready (tid=7f070aa92a00;cpuset=[0]) 00:04:19.346 EAL: Trying to obtain current memory policy. 00:04:19.346 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.346 EAL: Restoring previous memory policy: 0 00:04:19.346 EAL: request: mp_malloc_sync 00:04:19.346 EAL: No shared files mode enabled, IPC is disabled 00:04:19.346 EAL: Heap on socket 0 was expanded by 2MB 00:04:19.346 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:19.346 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:19.346 EAL: Mem event callback 'spdk:(nil)' registered 00:04:19.346 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:19.346 00:04:19.346 00:04:19.346 CUnit - A unit testing framework for C - Version 2.1-3 00:04:19.346 http://cunit.sourceforge.net/ 00:04:19.346 00:04:19.346 00:04:19.346 Suite: components_suite 00:04:19.346 Test: vtophys_malloc_test ...passed 00:04:19.346 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:19.346 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.346 EAL: Restoring previous memory policy: 4 00:04:19.346 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.346 EAL: request: mp_malloc_sync 00:04:19.346 EAL: No shared files mode enabled, IPC is disabled 00:04:19.346 EAL: Heap on socket 0 was expanded by 4MB 00:04:19.346 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.346 EAL: request: mp_malloc_sync 00:04:19.346 EAL: No shared files mode enabled, IPC is disabled 00:04:19.346 EAL: Heap on socket 0 was shrunk by 4MB 00:04:19.346 EAL: Trying to obtain current memory policy. 00:04:19.346 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.346 EAL: Restoring previous memory policy: 4 00:04:19.346 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.346 EAL: request: mp_malloc_sync 00:04:19.346 EAL: No shared files mode enabled, IPC is disabled 00:04:19.346 EAL: Heap on socket 0 was expanded by 6MB 00:04:19.346 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.346 EAL: request: mp_malloc_sync 00:04:19.346 EAL: No shared files mode enabled, IPC is disabled 00:04:19.346 EAL: Heap on socket 0 was shrunk by 6MB 00:04:19.346 EAL: Trying to obtain current memory policy. 00:04:19.346 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.346 EAL: Restoring previous memory policy: 4 00:04:19.346 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.346 EAL: request: mp_malloc_sync 00:04:19.346 EAL: No shared files mode enabled, IPC is disabled 00:04:19.346 EAL: Heap on socket 0 was expanded by 10MB 00:04:19.346 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.346 EAL: request: mp_malloc_sync 00:04:19.346 EAL: No shared files mode enabled, IPC is disabled 00:04:19.346 EAL: Heap on socket 0 was shrunk by 10MB 00:04:19.346 EAL: Trying to obtain current memory policy. 
00:04:19.346 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.346 EAL: Restoring previous memory policy: 4 00:04:19.346 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.346 EAL: request: mp_malloc_sync 00:04:19.346 EAL: No shared files mode enabled, IPC is disabled 00:04:19.346 EAL: Heap on socket 0 was expanded by 18MB 00:04:19.346 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.346 EAL: request: mp_malloc_sync 00:04:19.346 EAL: No shared files mode enabled, IPC is disabled 00:04:19.346 EAL: Heap on socket 0 was shrunk by 18MB 00:04:19.346 EAL: Trying to obtain current memory policy. 00:04:19.346 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.346 EAL: Restoring previous memory policy: 4 00:04:19.346 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.346 EAL: request: mp_malloc_sync 00:04:19.346 EAL: No shared files mode enabled, IPC is disabled 00:04:19.346 EAL: Heap on socket 0 was expanded by 34MB 00:04:19.346 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.346 EAL: request: mp_malloc_sync 00:04:19.346 EAL: No shared files mode enabled, IPC is disabled 00:04:19.346 EAL: Heap on socket 0 was shrunk by 34MB 00:04:19.346 EAL: Trying to obtain current memory policy. 00:04:19.346 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.346 EAL: Restoring previous memory policy: 4 00:04:19.346 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.346 EAL: request: mp_malloc_sync 00:04:19.346 EAL: No shared files mode enabled, IPC is disabled 00:04:19.346 EAL: Heap on socket 0 was expanded by 66MB 00:04:19.346 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.346 EAL: request: mp_malloc_sync 00:04:19.346 EAL: No shared files mode enabled, IPC is disabled 00:04:19.346 EAL: Heap on socket 0 was shrunk by 66MB 00:04:19.346 EAL: Trying to obtain current memory policy. 00:04:19.346 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.346 EAL: Restoring previous memory policy: 4 00:04:19.346 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.346 EAL: request: mp_malloc_sync 00:04:19.346 EAL: No shared files mode enabled, IPC is disabled 00:04:19.346 EAL: Heap on socket 0 was expanded by 130MB 00:04:19.346 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.605 EAL: request: mp_malloc_sync 00:04:19.605 EAL: No shared files mode enabled, IPC is disabled 00:04:19.605 EAL: Heap on socket 0 was shrunk by 130MB 00:04:19.605 EAL: Trying to obtain current memory policy. 00:04:19.605 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.605 EAL: Restoring previous memory policy: 4 00:04:19.605 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.605 EAL: request: mp_malloc_sync 00:04:19.605 EAL: No shared files mode enabled, IPC is disabled 00:04:19.605 EAL: Heap on socket 0 was expanded by 258MB 00:04:19.605 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.864 EAL: request: mp_malloc_sync 00:04:19.864 EAL: No shared files mode enabled, IPC is disabled 00:04:19.864 EAL: Heap on socket 0 was shrunk by 258MB 00:04:19.864 EAL: Trying to obtain current memory policy. 
00:04:19.864 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.864 EAL: Restoring previous memory policy: 4 00:04:19.864 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.864 EAL: request: mp_malloc_sync 00:04:19.864 EAL: No shared files mode enabled, IPC is disabled 00:04:19.864 EAL: Heap on socket 0 was expanded by 514MB 00:04:20.122 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.380 EAL: request: mp_malloc_sync 00:04:20.380 EAL: No shared files mode enabled, IPC is disabled 00:04:20.380 EAL: Heap on socket 0 was shrunk by 514MB 00:04:20.380 EAL: Trying to obtain current memory policy. 00:04:20.380 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.638 EAL: Restoring previous memory policy: 4 00:04:20.638 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.638 EAL: request: mp_malloc_sync 00:04:20.638 EAL: No shared files mode enabled, IPC is disabled 00:04:20.638 EAL: Heap on socket 0 was expanded by 1026MB 00:04:20.897 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.155 passed 00:04:21.155 00:04:21.155 Run Summary: Type Total Ran Passed Failed Inactive 00:04:21.155 suites 1 1 n/a 0 0 00:04:21.155 tests 2 2 2 0 0 00:04:21.155 asserts 5246 5246 5246 0 n/a 00:04:21.155 00:04:21.155 Elapsed time = 1.918 seconds 00:04:21.155 EAL: request: mp_malloc_sync 00:04:21.155 EAL: No shared files mode enabled, IPC is disabled 00:04:21.155 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:21.155 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.155 EAL: request: mp_malloc_sync 00:04:21.155 EAL: No shared files mode enabled, IPC is disabled 00:04:21.155 EAL: Heap on socket 0 was shrunk by 2MB 00:04:21.155 EAL: No shared files mode enabled, IPC is disabled 00:04:21.155 EAL: No shared files mode enabled, IPC is disabled 00:04:21.155 EAL: No shared files mode enabled, IPC is disabled 00:04:21.155 ************************************ 00:04:21.155 END TEST env_vtophys 00:04:21.155 ************************************ 00:04:21.155 00:04:21.155 real 0m2.120s 00:04:21.155 user 0m1.221s 00:04:21.155 sys 0m0.761s 00:04:21.155 12:28:47 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.155 12:28:47 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:21.413 12:28:47 env -- common/autotest_common.sh@1142 -- # return 0 00:04:21.413 12:28:47 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:21.413 12:28:47 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:21.413 12:28:47 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.413 12:28:47 env -- common/autotest_common.sh@10 -- # set +x 00:04:21.413 ************************************ 00:04:21.413 START TEST env_pci 00:04:21.413 ************************************ 00:04:21.413 12:28:47 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:21.413 00:04:21.413 00:04:21.413 CUnit - A unit testing framework for C - Version 2.1-3 00:04:21.413 http://cunit.sourceforge.net/ 00:04:21.413 00:04:21.413 00:04:21.413 Suite: pci 00:04:21.413 Test: pci_hook ...[2024-07-12 12:28:47.261805] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58756 has claimed it 00:04:21.413 passed 00:04:21.413 00:04:21.413 Run Summary: Type Total Ran Passed Failed Inactive 00:04:21.413 suites 1 1 n/a 0 0 00:04:21.413 tests 1 1 1 0 0 00:04:21.413 asserts 25 25 25 0 n/a 00:04:21.413 
00:04:21.413 Elapsed time = 0.003 seconds 00:04:21.413 EAL: Cannot find device (10000:00:01.0) 00:04:21.413 EAL: Failed to attach device on primary process 00:04:21.413 00:04:21.413 real 0m0.022s 00:04:21.413 user 0m0.010s 00:04:21.413 sys 0m0.012s 00:04:21.413 12:28:47 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.413 12:28:47 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:21.413 ************************************ 00:04:21.413 END TEST env_pci 00:04:21.413 ************************************ 00:04:21.413 12:28:47 env -- common/autotest_common.sh@1142 -- # return 0 00:04:21.413 12:28:47 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:21.413 12:28:47 env -- env/env.sh@15 -- # uname 00:04:21.413 12:28:47 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:21.413 12:28:47 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:21.413 12:28:47 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:21.413 12:28:47 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:21.413 12:28:47 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.413 12:28:47 env -- common/autotest_common.sh@10 -- # set +x 00:04:21.413 ************************************ 00:04:21.413 START TEST env_dpdk_post_init 00:04:21.413 ************************************ 00:04:21.413 12:28:47 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:21.413 EAL: Detected CPU lcores: 10 00:04:21.413 EAL: Detected NUMA nodes: 1 00:04:21.413 EAL: Detected shared linkage of DPDK 00:04:21.413 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:21.413 EAL: Selected IOVA mode 'PA' 00:04:21.413 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:21.671 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:21.671 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:21.671 Starting DPDK initialization... 00:04:21.671 Starting SPDK post initialization... 00:04:21.671 SPDK NVMe probe 00:04:21.671 Attaching to 0000:00:10.0 00:04:21.671 Attaching to 0000:00:11.0 00:04:21.671 Attached to 0000:00:10.0 00:04:21.671 Attached to 0000:00:11.0 00:04:21.671 Cleaning up... 
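env_dpdk_post_init above runs a full EAL init plus SPDK NVMe probe/attach/detach cycle against both controllers. To repeat just this step by hand, with the binary path and flags copied from the trace (sudo assumed; the controllers must first be bound away from the kernel nvme driver, which scripts/setup.sh handles):

  rootdir=/home/vagrant/spdk_repo/spdk
  sudo "$rootdir/scripts/setup.sh"          # bind 0000:00:10.0 / 0000:00:11.0 to a userspace-capable driver
  sudo "$rootdir/test/env/env_dpdk_post_init/env_dpdk_post_init" -c 0x1 --base-virtaddr=0x200000000000
  sudo "$rootdir/scripts/setup.sh" reset    # hand the controllers back to the kernel driver
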
00:04:21.671 ************************************ 00:04:21.671 END TEST env_dpdk_post_init 00:04:21.671 ************************************ 00:04:21.671 00:04:21.671 real 0m0.182s 00:04:21.671 user 0m0.049s 00:04:21.671 sys 0m0.031s 00:04:21.671 12:28:47 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.671 12:28:47 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:21.671 12:28:47 env -- common/autotest_common.sh@1142 -- # return 0 00:04:21.671 12:28:47 env -- env/env.sh@26 -- # uname 00:04:21.671 12:28:47 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:21.671 12:28:47 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:21.671 12:28:47 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:21.671 12:28:47 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.671 12:28:47 env -- common/autotest_common.sh@10 -- # set +x 00:04:21.671 ************************************ 00:04:21.671 START TEST env_mem_callbacks 00:04:21.671 ************************************ 00:04:21.671 12:28:47 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:21.671 EAL: Detected CPU lcores: 10 00:04:21.671 EAL: Detected NUMA nodes: 1 00:04:21.672 EAL: Detected shared linkage of DPDK 00:04:21.672 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:21.672 EAL: Selected IOVA mode 'PA' 00:04:21.672 00:04:21.672 00:04:21.672 CUnit - A unit testing framework for C - Version 2.1-3 00:04:21.672 http://cunit.sourceforge.net/ 00:04:21.672 00:04:21.672 00:04:21.672 Suite: memory 00:04:21.672 Test: test ... 00:04:21.672 register 0x200000200000 2097152 00:04:21.672 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:21.672 malloc 3145728 00:04:21.672 register 0x200000400000 4194304 00:04:21.672 buf 0x200000500000 len 3145728 PASSED 00:04:21.672 malloc 64 00:04:21.672 buf 0x2000004fff40 len 64 PASSED 00:04:21.672 malloc 4194304 00:04:21.672 register 0x200000800000 6291456 00:04:21.672 buf 0x200000a00000 len 4194304 PASSED 00:04:21.672 free 0x200000500000 3145728 00:04:21.672 free 0x2000004fff40 64 00:04:21.672 unregister 0x200000400000 4194304 PASSED 00:04:21.672 free 0x200000a00000 4194304 00:04:21.672 unregister 0x200000800000 6291456 PASSED 00:04:21.672 malloc 8388608 00:04:21.672 register 0x200000400000 10485760 00:04:21.672 buf 0x200000600000 len 8388608 PASSED 00:04:21.672 free 0x200000600000 8388608 00:04:21.672 unregister 0x200000400000 10485760 PASSED 00:04:21.672 passed 00:04:21.672 00:04:21.672 Run Summary: Type Total Ran Passed Failed Inactive 00:04:21.672 suites 1 1 n/a 0 0 00:04:21.672 tests 1 1 1 0 0 00:04:21.672 asserts 15 15 15 0 n/a 00:04:21.672 00:04:21.672 Elapsed time = 0.009 seconds 00:04:21.672 00:04:21.672 real 0m0.145s 00:04:21.672 user 0m0.015s 00:04:21.672 sys 0m0.028s 00:04:21.672 12:28:47 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.672 12:28:47 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:21.672 ************************************ 00:04:21.672 END TEST env_mem_callbacks 00:04:21.672 ************************************ 00:04:21.930 12:28:47 env -- common/autotest_common.sh@1142 -- # return 0 00:04:21.930 00:04:21.930 real 0m3.049s 00:04:21.930 user 0m1.636s 00:04:21.930 sys 0m1.048s 00:04:21.930 12:28:47 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.930 
12:28:47 env -- common/autotest_common.sh@10 -- # set +x 00:04:21.930 ************************************ 00:04:21.930 END TEST env 00:04:21.930 ************************************ 00:04:21.930 12:28:47 -- common/autotest_common.sh@1142 -- # return 0 00:04:21.930 12:28:47 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:21.930 12:28:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:21.930 12:28:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.930 12:28:47 -- common/autotest_common.sh@10 -- # set +x 00:04:21.930 ************************************ 00:04:21.930 START TEST rpc 00:04:21.930 ************************************ 00:04:21.930 12:28:47 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:21.930 * Looking for test storage... 00:04:21.930 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:21.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:21.930 12:28:47 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58866 00:04:21.930 12:28:47 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:21.930 12:28:47 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58866 00:04:21.930 12:28:47 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:21.930 12:28:47 rpc -- common/autotest_common.sh@829 -- # '[' -z 58866 ']' 00:04:21.930 12:28:47 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:21.930 12:28:47 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:21.930 12:28:47 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:21.930 12:28:47 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:21.930 12:28:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.930 [2024-07-12 12:28:47.955248] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:04:21.930 [2024-07-12 12:28:47.956528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58866 ] 00:04:22.188 [2024-07-12 12:28:48.095456] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.188 [2024-07-12 12:28:48.214916] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:22.188 [2024-07-12 12:28:48.214994] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58866' to capture a snapshot of events at runtime. 00:04:22.188 [2024-07-12 12:28:48.215007] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:22.188 [2024-07-12 12:28:48.215016] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:22.188 [2024-07-12 12:28:48.215024] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58866 for offline analysis/debug. 
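The startup notices above describe how to inspect the bdev tracepoint group that spdk_tgt was launched with (-e bdev). A minimal sketch, using only the command and shared-memory path quoted in those notices, follows; the snapshot file name is hypothetical and the target (pid 58866) is assumed to still be running.
# capture a snapshot of the enabled tracepoints while the target runs (command quoted from the notice above)
spdk_trace -s spdk_tgt -p 58866 > trace_snapshot.txt   # hypothetical output file name
# or keep the shared-memory trace file for offline analysis after the target exits
cp /dev/shm/spdk_tgt_trace.pid58866 ./spdk_tgt_trace.pid58866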
00:04:22.188 [2024-07-12 12:28:48.215051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.446 [2024-07-12 12:28:48.300241] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:23.013 12:28:48 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:23.013 12:28:48 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:23.013 12:28:48 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:23.013 12:28:48 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:23.013 12:28:48 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:23.013 12:28:48 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:23.013 12:28:48 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:23.013 12:28:48 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.013 12:28:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.013 ************************************ 00:04:23.013 START TEST rpc_integrity 00:04:23.013 ************************************ 00:04:23.013 12:28:48 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:23.013 12:28:48 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:23.013 12:28:48 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.013 12:28:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.013 12:28:48 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.013 12:28:48 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:23.013 12:28:48 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:23.013 12:28:49 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:23.013 12:28:49 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:23.013 12:28:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.013 12:28:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.013 12:28:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.013 12:28:49 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:23.013 12:28:49 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:23.013 12:28:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.013 12:28:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.013 12:28:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.013 12:28:49 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:23.013 { 00:04:23.013 "name": "Malloc0", 00:04:23.013 "aliases": [ 00:04:23.013 "0b92a665-9058-477b-a157-0458f4ab252d" 00:04:23.013 ], 00:04:23.013 "product_name": "Malloc disk", 00:04:23.013 "block_size": 512, 00:04:23.013 "num_blocks": 16384, 00:04:23.013 "uuid": "0b92a665-9058-477b-a157-0458f4ab252d", 00:04:23.013 "assigned_rate_limits": { 00:04:23.013 "rw_ios_per_sec": 0, 00:04:23.013 "rw_mbytes_per_sec": 0, 00:04:23.013 "r_mbytes_per_sec": 0, 00:04:23.013 "w_mbytes_per_sec": 0 00:04:23.013 }, 00:04:23.013 "claimed": false, 00:04:23.013 "zoned": false, 00:04:23.013 
"supported_io_types": { 00:04:23.013 "read": true, 00:04:23.013 "write": true, 00:04:23.013 "unmap": true, 00:04:23.013 "flush": true, 00:04:23.013 "reset": true, 00:04:23.013 "nvme_admin": false, 00:04:23.013 "nvme_io": false, 00:04:23.013 "nvme_io_md": false, 00:04:23.013 "write_zeroes": true, 00:04:23.013 "zcopy": true, 00:04:23.013 "get_zone_info": false, 00:04:23.013 "zone_management": false, 00:04:23.013 "zone_append": false, 00:04:23.013 "compare": false, 00:04:23.013 "compare_and_write": false, 00:04:23.013 "abort": true, 00:04:23.013 "seek_hole": false, 00:04:23.013 "seek_data": false, 00:04:23.013 "copy": true, 00:04:23.013 "nvme_iov_md": false 00:04:23.013 }, 00:04:23.013 "memory_domains": [ 00:04:23.013 { 00:04:23.013 "dma_device_id": "system", 00:04:23.013 "dma_device_type": 1 00:04:23.013 }, 00:04:23.013 { 00:04:23.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.013 "dma_device_type": 2 00:04:23.013 } 00:04:23.013 ], 00:04:23.013 "driver_specific": {} 00:04:23.013 } 00:04:23.013 ]' 00:04:23.013 12:28:49 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:23.270 12:28:49 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:23.271 12:28:49 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:23.271 12:28:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.271 12:28:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.271 [2024-07-12 12:28:49.109008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:23.271 [2024-07-12 12:28:49.109090] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:23.271 [2024-07-12 12:28:49.109116] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x217eda0 00:04:23.271 [2024-07-12 12:28:49.109126] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:23.271 [2024-07-12 12:28:49.111006] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:23.271 [2024-07-12 12:28:49.111043] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:23.271 Passthru0 00:04:23.271 12:28:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.271 12:28:49 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:23.271 12:28:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.271 12:28:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.271 12:28:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.271 12:28:49 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:23.271 { 00:04:23.271 "name": "Malloc0", 00:04:23.271 "aliases": [ 00:04:23.271 "0b92a665-9058-477b-a157-0458f4ab252d" 00:04:23.271 ], 00:04:23.271 "product_name": "Malloc disk", 00:04:23.271 "block_size": 512, 00:04:23.271 "num_blocks": 16384, 00:04:23.271 "uuid": "0b92a665-9058-477b-a157-0458f4ab252d", 00:04:23.271 "assigned_rate_limits": { 00:04:23.271 "rw_ios_per_sec": 0, 00:04:23.271 "rw_mbytes_per_sec": 0, 00:04:23.271 "r_mbytes_per_sec": 0, 00:04:23.271 "w_mbytes_per_sec": 0 00:04:23.271 }, 00:04:23.271 "claimed": true, 00:04:23.271 "claim_type": "exclusive_write", 00:04:23.271 "zoned": false, 00:04:23.271 "supported_io_types": { 00:04:23.271 "read": true, 00:04:23.271 "write": true, 00:04:23.271 "unmap": true, 00:04:23.271 "flush": true, 00:04:23.271 "reset": true, 00:04:23.271 "nvme_admin": false, 
00:04:23.271 "nvme_io": false, 00:04:23.271 "nvme_io_md": false, 00:04:23.271 "write_zeroes": true, 00:04:23.271 "zcopy": true, 00:04:23.271 "get_zone_info": false, 00:04:23.271 "zone_management": false, 00:04:23.271 "zone_append": false, 00:04:23.271 "compare": false, 00:04:23.271 "compare_and_write": false, 00:04:23.271 "abort": true, 00:04:23.271 "seek_hole": false, 00:04:23.271 "seek_data": false, 00:04:23.271 "copy": true, 00:04:23.271 "nvme_iov_md": false 00:04:23.271 }, 00:04:23.271 "memory_domains": [ 00:04:23.271 { 00:04:23.271 "dma_device_id": "system", 00:04:23.271 "dma_device_type": 1 00:04:23.271 }, 00:04:23.271 { 00:04:23.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.271 "dma_device_type": 2 00:04:23.271 } 00:04:23.271 ], 00:04:23.271 "driver_specific": {} 00:04:23.271 }, 00:04:23.271 { 00:04:23.271 "name": "Passthru0", 00:04:23.271 "aliases": [ 00:04:23.271 "ed631def-a790-59a9-a15c-53eb19cc541f" 00:04:23.271 ], 00:04:23.271 "product_name": "passthru", 00:04:23.271 "block_size": 512, 00:04:23.271 "num_blocks": 16384, 00:04:23.271 "uuid": "ed631def-a790-59a9-a15c-53eb19cc541f", 00:04:23.271 "assigned_rate_limits": { 00:04:23.271 "rw_ios_per_sec": 0, 00:04:23.271 "rw_mbytes_per_sec": 0, 00:04:23.271 "r_mbytes_per_sec": 0, 00:04:23.271 "w_mbytes_per_sec": 0 00:04:23.271 }, 00:04:23.271 "claimed": false, 00:04:23.271 "zoned": false, 00:04:23.271 "supported_io_types": { 00:04:23.271 "read": true, 00:04:23.271 "write": true, 00:04:23.271 "unmap": true, 00:04:23.271 "flush": true, 00:04:23.271 "reset": true, 00:04:23.271 "nvme_admin": false, 00:04:23.271 "nvme_io": false, 00:04:23.271 "nvme_io_md": false, 00:04:23.271 "write_zeroes": true, 00:04:23.271 "zcopy": true, 00:04:23.271 "get_zone_info": false, 00:04:23.271 "zone_management": false, 00:04:23.271 "zone_append": false, 00:04:23.271 "compare": false, 00:04:23.271 "compare_and_write": false, 00:04:23.271 "abort": true, 00:04:23.271 "seek_hole": false, 00:04:23.271 "seek_data": false, 00:04:23.271 "copy": true, 00:04:23.271 "nvme_iov_md": false 00:04:23.271 }, 00:04:23.271 "memory_domains": [ 00:04:23.271 { 00:04:23.271 "dma_device_id": "system", 00:04:23.271 "dma_device_type": 1 00:04:23.271 }, 00:04:23.271 { 00:04:23.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.271 "dma_device_type": 2 00:04:23.271 } 00:04:23.271 ], 00:04:23.271 "driver_specific": { 00:04:23.271 "passthru": { 00:04:23.271 "name": "Passthru0", 00:04:23.271 "base_bdev_name": "Malloc0" 00:04:23.271 } 00:04:23.271 } 00:04:23.271 } 00:04:23.271 ]' 00:04:23.271 12:28:49 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:23.271 12:28:49 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:23.271 12:28:49 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:23.271 12:28:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.271 12:28:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.271 12:28:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.271 12:28:49 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:23.271 12:28:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.271 12:28:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.271 12:28:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.271 12:28:49 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:23.271 12:28:49 rpc.rpc_integrity -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.271 12:28:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.271 12:28:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.271 12:28:49 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:23.271 12:28:49 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:23.271 ************************************ 00:04:23.271 END TEST rpc_integrity 00:04:23.271 ************************************ 00:04:23.271 12:28:49 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:23.271 00:04:23.271 real 0m0.338s 00:04:23.271 user 0m0.219s 00:04:23.271 sys 0m0.045s 00:04:23.271 12:28:49 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:23.271 12:28:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.271 12:28:49 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:23.271 12:28:49 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:23.271 12:28:49 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:23.271 12:28:49 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.271 12:28:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.271 ************************************ 00:04:23.271 START TEST rpc_plugins 00:04:23.271 ************************************ 00:04:23.271 12:28:49 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:23.271 12:28:49 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:23.271 12:28:49 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.271 12:28:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:23.529 12:28:49 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.529 12:28:49 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:23.529 12:28:49 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:23.529 12:28:49 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.529 12:28:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:23.529 12:28:49 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.529 12:28:49 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:23.529 { 00:04:23.529 "name": "Malloc1", 00:04:23.529 "aliases": [ 00:04:23.529 "fe6040d0-3fec-4ef7-a26f-cc4f4a439083" 00:04:23.529 ], 00:04:23.529 "product_name": "Malloc disk", 00:04:23.529 "block_size": 4096, 00:04:23.529 "num_blocks": 256, 00:04:23.529 "uuid": "fe6040d0-3fec-4ef7-a26f-cc4f4a439083", 00:04:23.529 "assigned_rate_limits": { 00:04:23.529 "rw_ios_per_sec": 0, 00:04:23.529 "rw_mbytes_per_sec": 0, 00:04:23.529 "r_mbytes_per_sec": 0, 00:04:23.529 "w_mbytes_per_sec": 0 00:04:23.529 }, 00:04:23.529 "claimed": false, 00:04:23.529 "zoned": false, 00:04:23.529 "supported_io_types": { 00:04:23.529 "read": true, 00:04:23.529 "write": true, 00:04:23.529 "unmap": true, 00:04:23.529 "flush": true, 00:04:23.529 "reset": true, 00:04:23.529 "nvme_admin": false, 00:04:23.529 "nvme_io": false, 00:04:23.529 "nvme_io_md": false, 00:04:23.529 "write_zeroes": true, 00:04:23.529 "zcopy": true, 00:04:23.529 "get_zone_info": false, 00:04:23.529 "zone_management": false, 00:04:23.529 "zone_append": false, 00:04:23.529 "compare": false, 00:04:23.529 "compare_and_write": false, 00:04:23.529 "abort": true, 00:04:23.529 "seek_hole": false, 00:04:23.529 "seek_data": false, 00:04:23.529 "copy": true, 00:04:23.529 
"nvme_iov_md": false 00:04:23.529 }, 00:04:23.529 "memory_domains": [ 00:04:23.529 { 00:04:23.529 "dma_device_id": "system", 00:04:23.529 "dma_device_type": 1 00:04:23.529 }, 00:04:23.529 { 00:04:23.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.529 "dma_device_type": 2 00:04:23.529 } 00:04:23.529 ], 00:04:23.530 "driver_specific": {} 00:04:23.530 } 00:04:23.530 ]' 00:04:23.530 12:28:49 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:23.530 12:28:49 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:23.530 12:28:49 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:23.530 12:28:49 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.530 12:28:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:23.530 12:28:49 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.530 12:28:49 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:23.530 12:28:49 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.530 12:28:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:23.530 12:28:49 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.530 12:28:49 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:23.530 12:28:49 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:23.530 ************************************ 00:04:23.530 END TEST rpc_plugins 00:04:23.530 ************************************ 00:04:23.530 12:28:49 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:23.530 00:04:23.530 real 0m0.167s 00:04:23.530 user 0m0.119s 00:04:23.530 sys 0m0.010s 00:04:23.530 12:28:49 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:23.530 12:28:49 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:23.530 12:28:49 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:23.530 12:28:49 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:23.530 12:28:49 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:23.530 12:28:49 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.530 12:28:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.530 ************************************ 00:04:23.530 START TEST rpc_trace_cmd_test 00:04:23.530 ************************************ 00:04:23.530 12:28:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:23.530 12:28:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:23.530 12:28:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:23.530 12:28:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:23.530 12:28:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:23.530 12:28:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:23.530 12:28:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:23.530 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58866", 00:04:23.530 "tpoint_group_mask": "0x8", 00:04:23.530 "iscsi_conn": { 00:04:23.530 "mask": "0x2", 00:04:23.530 "tpoint_mask": "0x0" 00:04:23.530 }, 00:04:23.530 "scsi": { 00:04:23.530 "mask": "0x4", 00:04:23.530 "tpoint_mask": "0x0" 00:04:23.530 }, 00:04:23.530 "bdev": { 00:04:23.530 "mask": "0x8", 00:04:23.530 "tpoint_mask": "0xffffffffffffffff" 00:04:23.530 }, 00:04:23.530 "nvmf_rdma": { 00:04:23.530 "mask": "0x10", 00:04:23.530 "tpoint_mask": "0x0" 
00:04:23.530 }, 00:04:23.530 "nvmf_tcp": { 00:04:23.530 "mask": "0x20", 00:04:23.530 "tpoint_mask": "0x0" 00:04:23.530 }, 00:04:23.530 "ftl": { 00:04:23.530 "mask": "0x40", 00:04:23.530 "tpoint_mask": "0x0" 00:04:23.530 }, 00:04:23.530 "blobfs": { 00:04:23.530 "mask": "0x80", 00:04:23.530 "tpoint_mask": "0x0" 00:04:23.530 }, 00:04:23.530 "dsa": { 00:04:23.530 "mask": "0x200", 00:04:23.530 "tpoint_mask": "0x0" 00:04:23.530 }, 00:04:23.530 "thread": { 00:04:23.530 "mask": "0x400", 00:04:23.530 "tpoint_mask": "0x0" 00:04:23.530 }, 00:04:23.530 "nvme_pcie": { 00:04:23.530 "mask": "0x800", 00:04:23.530 "tpoint_mask": "0x0" 00:04:23.530 }, 00:04:23.530 "iaa": { 00:04:23.530 "mask": "0x1000", 00:04:23.530 "tpoint_mask": "0x0" 00:04:23.530 }, 00:04:23.530 "nvme_tcp": { 00:04:23.530 "mask": "0x2000", 00:04:23.530 "tpoint_mask": "0x0" 00:04:23.530 }, 00:04:23.530 "bdev_nvme": { 00:04:23.530 "mask": "0x4000", 00:04:23.530 "tpoint_mask": "0x0" 00:04:23.530 }, 00:04:23.530 "sock": { 00:04:23.530 "mask": "0x8000", 00:04:23.530 "tpoint_mask": "0x0" 00:04:23.530 } 00:04:23.530 }' 00:04:23.530 12:28:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:23.788 12:28:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:23.788 12:28:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:23.788 12:28:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:23.788 12:28:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:23.788 12:28:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:23.788 12:28:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:23.788 12:28:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:23.788 12:28:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:23.788 ************************************ 00:04:23.788 END TEST rpc_trace_cmd_test 00:04:23.788 ************************************ 00:04:23.788 12:28:49 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:23.788 00:04:23.788 real 0m0.293s 00:04:23.788 user 0m0.247s 00:04:23.788 sys 0m0.035s 00:04:23.788 12:28:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:23.788 12:28:49 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:24.047 12:28:49 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:24.047 12:28:49 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:24.047 12:28:49 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:24.047 12:28:49 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:24.047 12:28:49 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:24.047 12:28:49 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.047 12:28:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.047 ************************************ 00:04:24.047 START TEST rpc_daemon_integrity 00:04:24.047 ************************************ 00:04:24.047 12:28:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:24.047 12:28:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:24.047 12:28:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:24.047 12:28:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.047 12:28:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:24.047 
12:28:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:24.047 12:28:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:24.047 12:28:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:24.047 12:28:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:24.047 12:28:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:24.047 12:28:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.047 12:28:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:24.047 12:28:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:24.047 12:28:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:24.047 12:28:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:24.047 12:28:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.047 12:28:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:24.047 12:28:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:24.047 { 00:04:24.047 "name": "Malloc2", 00:04:24.047 "aliases": [ 00:04:24.047 "e12d7be6-803c-4dc6-8836-8523ee97aa17" 00:04:24.047 ], 00:04:24.047 "product_name": "Malloc disk", 00:04:24.047 "block_size": 512, 00:04:24.047 "num_blocks": 16384, 00:04:24.047 "uuid": "e12d7be6-803c-4dc6-8836-8523ee97aa17", 00:04:24.047 "assigned_rate_limits": { 00:04:24.047 "rw_ios_per_sec": 0, 00:04:24.047 "rw_mbytes_per_sec": 0, 00:04:24.047 "r_mbytes_per_sec": 0, 00:04:24.047 "w_mbytes_per_sec": 0 00:04:24.047 }, 00:04:24.047 "claimed": false, 00:04:24.047 "zoned": false, 00:04:24.047 "supported_io_types": { 00:04:24.047 "read": true, 00:04:24.047 "write": true, 00:04:24.047 "unmap": true, 00:04:24.047 "flush": true, 00:04:24.047 "reset": true, 00:04:24.047 "nvme_admin": false, 00:04:24.047 "nvme_io": false, 00:04:24.047 "nvme_io_md": false, 00:04:24.047 "write_zeroes": true, 00:04:24.047 "zcopy": true, 00:04:24.047 "get_zone_info": false, 00:04:24.047 "zone_management": false, 00:04:24.047 "zone_append": false, 00:04:24.047 "compare": false, 00:04:24.047 "compare_and_write": false, 00:04:24.047 "abort": true, 00:04:24.047 "seek_hole": false, 00:04:24.047 "seek_data": false, 00:04:24.047 "copy": true, 00:04:24.047 "nvme_iov_md": false 00:04:24.047 }, 00:04:24.047 "memory_domains": [ 00:04:24.047 { 00:04:24.047 "dma_device_id": "system", 00:04:24.047 "dma_device_type": 1 00:04:24.047 }, 00:04:24.047 { 00:04:24.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.047 "dma_device_type": 2 00:04:24.047 } 00:04:24.047 ], 00:04:24.047 "driver_specific": {} 00:04:24.047 } 00:04:24.047 ]' 00:04:24.047 12:28:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:24.047 12:28:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:24.047 12:28:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:24.047 12:28:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:24.047 12:28:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.047 [2024-07-12 12:28:50.063268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:24.047 [2024-07-12 12:28:50.063343] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:24.047 [2024-07-12 12:28:50.063387] vbdev_passthru.c: 
680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21e3be0 00:04:24.047 [2024-07-12 12:28:50.063397] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:24.047 [2024-07-12 12:28:50.065354] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:24.047 [2024-07-12 12:28:50.065503] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:24.047 Passthru0 00:04:24.047 12:28:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:24.047 12:28:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:24.047 12:28:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:24.047 12:28:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.047 12:28:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:24.047 12:28:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:24.047 { 00:04:24.047 "name": "Malloc2", 00:04:24.047 "aliases": [ 00:04:24.047 "e12d7be6-803c-4dc6-8836-8523ee97aa17" 00:04:24.047 ], 00:04:24.047 "product_name": "Malloc disk", 00:04:24.047 "block_size": 512, 00:04:24.047 "num_blocks": 16384, 00:04:24.047 "uuid": "e12d7be6-803c-4dc6-8836-8523ee97aa17", 00:04:24.047 "assigned_rate_limits": { 00:04:24.047 "rw_ios_per_sec": 0, 00:04:24.047 "rw_mbytes_per_sec": 0, 00:04:24.047 "r_mbytes_per_sec": 0, 00:04:24.047 "w_mbytes_per_sec": 0 00:04:24.047 }, 00:04:24.047 "claimed": true, 00:04:24.047 "claim_type": "exclusive_write", 00:04:24.047 "zoned": false, 00:04:24.047 "supported_io_types": { 00:04:24.047 "read": true, 00:04:24.047 "write": true, 00:04:24.047 "unmap": true, 00:04:24.047 "flush": true, 00:04:24.047 "reset": true, 00:04:24.047 "nvme_admin": false, 00:04:24.047 "nvme_io": false, 00:04:24.047 "nvme_io_md": false, 00:04:24.047 "write_zeroes": true, 00:04:24.047 "zcopy": true, 00:04:24.047 "get_zone_info": false, 00:04:24.047 "zone_management": false, 00:04:24.047 "zone_append": false, 00:04:24.047 "compare": false, 00:04:24.047 "compare_and_write": false, 00:04:24.047 "abort": true, 00:04:24.047 "seek_hole": false, 00:04:24.047 "seek_data": false, 00:04:24.047 "copy": true, 00:04:24.047 "nvme_iov_md": false 00:04:24.047 }, 00:04:24.047 "memory_domains": [ 00:04:24.047 { 00:04:24.047 "dma_device_id": "system", 00:04:24.047 "dma_device_type": 1 00:04:24.047 }, 00:04:24.047 { 00:04:24.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.047 "dma_device_type": 2 00:04:24.047 } 00:04:24.047 ], 00:04:24.047 "driver_specific": {} 00:04:24.047 }, 00:04:24.047 { 00:04:24.047 "name": "Passthru0", 00:04:24.047 "aliases": [ 00:04:24.047 "2bf4e3c5-c0d1-5d52-9d6b-20c407fd9b5d" 00:04:24.047 ], 00:04:24.047 "product_name": "passthru", 00:04:24.047 "block_size": 512, 00:04:24.047 "num_blocks": 16384, 00:04:24.047 "uuid": "2bf4e3c5-c0d1-5d52-9d6b-20c407fd9b5d", 00:04:24.047 "assigned_rate_limits": { 00:04:24.047 "rw_ios_per_sec": 0, 00:04:24.047 "rw_mbytes_per_sec": 0, 00:04:24.047 "r_mbytes_per_sec": 0, 00:04:24.047 "w_mbytes_per_sec": 0 00:04:24.047 }, 00:04:24.047 "claimed": false, 00:04:24.047 "zoned": false, 00:04:24.047 "supported_io_types": { 00:04:24.047 "read": true, 00:04:24.047 "write": true, 00:04:24.047 "unmap": true, 00:04:24.047 "flush": true, 00:04:24.047 "reset": true, 00:04:24.047 "nvme_admin": false, 00:04:24.047 "nvme_io": false, 00:04:24.047 "nvme_io_md": false, 00:04:24.047 "write_zeroes": true, 00:04:24.047 "zcopy": true, 
00:04:24.047 "get_zone_info": false, 00:04:24.047 "zone_management": false, 00:04:24.047 "zone_append": false, 00:04:24.047 "compare": false, 00:04:24.047 "compare_and_write": false, 00:04:24.047 "abort": true, 00:04:24.047 "seek_hole": false, 00:04:24.047 "seek_data": false, 00:04:24.047 "copy": true, 00:04:24.047 "nvme_iov_md": false 00:04:24.047 }, 00:04:24.047 "memory_domains": [ 00:04:24.047 { 00:04:24.047 "dma_device_id": "system", 00:04:24.047 "dma_device_type": 1 00:04:24.047 }, 00:04:24.047 { 00:04:24.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:24.047 "dma_device_type": 2 00:04:24.047 } 00:04:24.047 ], 00:04:24.047 "driver_specific": { 00:04:24.047 "passthru": { 00:04:24.047 "name": "Passthru0", 00:04:24.047 "base_bdev_name": "Malloc2" 00:04:24.048 } 00:04:24.048 } 00:04:24.048 } 00:04:24.048 ]' 00:04:24.048 12:28:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:24.306 12:28:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:24.306 12:28:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:24.306 12:28:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:24.306 12:28:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.306 12:28:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:24.306 12:28:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:24.306 12:28:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:24.306 12:28:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.306 12:28:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:24.306 12:28:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:24.306 12:28:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:24.306 12:28:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.306 12:28:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:24.306 12:28:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:24.306 12:28:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:24.306 ************************************ 00:04:24.306 END TEST rpc_daemon_integrity 00:04:24.306 ************************************ 00:04:24.306 12:28:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:24.306 00:04:24.306 real 0m0.344s 00:04:24.306 user 0m0.230s 00:04:24.306 sys 0m0.041s 00:04:24.306 12:28:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.306 12:28:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:24.306 12:28:50 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:24.306 12:28:50 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:24.306 12:28:50 rpc -- rpc/rpc.sh@84 -- # killprocess 58866 00:04:24.306 12:28:50 rpc -- common/autotest_common.sh@948 -- # '[' -z 58866 ']' 00:04:24.306 12:28:50 rpc -- common/autotest_common.sh@952 -- # kill -0 58866 00:04:24.306 12:28:50 rpc -- common/autotest_common.sh@953 -- # uname 00:04:24.306 12:28:50 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:24.306 12:28:50 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58866 00:04:24.306 killing process with pid 58866 00:04:24.306 12:28:50 rpc -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:24.306 12:28:50 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:24.306 12:28:50 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58866' 00:04:24.306 12:28:50 rpc -- common/autotest_common.sh@967 -- # kill 58866 00:04:24.306 12:28:50 rpc -- common/autotest_common.sh@972 -- # wait 58866 00:04:24.870 ************************************ 00:04:24.870 END TEST rpc 00:04:24.870 ************************************ 00:04:24.870 00:04:24.870 real 0m3.126s 00:04:24.870 user 0m3.917s 00:04:24.870 sys 0m0.785s 00:04:24.870 12:28:50 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.870 12:28:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.130 12:28:50 -- common/autotest_common.sh@1142 -- # return 0 00:04:25.130 12:28:50 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:25.130 12:28:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.130 12:28:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.130 12:28:50 -- common/autotest_common.sh@10 -- # set +x 00:04:25.130 ************************************ 00:04:25.130 START TEST skip_rpc 00:04:25.130 ************************************ 00:04:25.130 12:28:50 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:25.130 * Looking for test storage... 00:04:25.130 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:25.130 12:28:51 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:25.130 12:28:51 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:25.130 12:28:51 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:25.130 12:28:51 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.130 12:28:51 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.130 12:28:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.130 ************************************ 00:04:25.130 START TEST skip_rpc 00:04:25.130 ************************************ 00:04:25.130 12:28:51 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:25.130 12:28:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=59064 00:04:25.130 12:28:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:25.130 12:28:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:25.130 12:28:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:25.130 [2024-07-12 12:28:51.157617] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:04:25.130 [2024-07-12 12:28:51.157749] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59064 ] 00:04:25.388 [2024-07-12 12:28:51.297900] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.388 [2024-07-12 12:28:51.421140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.646 [2024-07-12 12:28:51.503268] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:30.911 12:28:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:30.911 12:28:56 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:30.911 12:28:56 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:30.911 12:28:56 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:30.911 12:28:56 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:30.911 12:28:56 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:30.911 12:28:56 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:30.911 12:28:56 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:30.911 12:28:56 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:30.911 12:28:56 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.911 12:28:56 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:30.911 12:28:56 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:30.911 12:28:56 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:30.911 12:28:56 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:30.911 12:28:56 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:30.911 12:28:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:30.911 12:28:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 59064 00:04:30.911 12:28:56 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 59064 ']' 00:04:30.911 12:28:56 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 59064 00:04:30.911 12:28:56 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:30.911 12:28:56 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:30.911 12:28:56 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59064 00:04:30.911 killing process with pid 59064 00:04:30.911 12:28:56 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:30.911 12:28:56 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:30.911 12:28:56 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59064' 00:04:30.911 12:28:56 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 59064 00:04:30.911 12:28:56 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 59064 00:04:30.911 00:04:30.911 real 0m5.662s 00:04:30.911 user 0m5.178s 00:04:30.911 sys 0m0.383s 00:04:30.911 12:28:56 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:30.911 ************************************ 00:04:30.911 12:28:56 
skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.911 END TEST skip_rpc 00:04:30.911 ************************************ 00:04:30.911 12:28:56 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:30.911 12:28:56 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:30.911 12:28:56 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:30.911 12:28:56 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.911 12:28:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.911 ************************************ 00:04:30.911 START TEST skip_rpc_with_json 00:04:30.911 ************************************ 00:04:30.911 12:28:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:30.911 12:28:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:30.911 12:28:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59156 00:04:30.911 12:28:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:30.911 12:28:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:30.911 12:28:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59156 00:04:30.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.911 12:28:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 59156 ']' 00:04:30.911 12:28:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.911 12:28:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:30.911 12:28:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.911 12:28:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:30.911 12:28:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:30.911 [2024-07-12 12:28:56.871531] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:04:30.911 [2024-07-12 12:28:56.871632] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59156 ] 00:04:31.168 [2024-07-12 12:28:57.010488] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.168 [2024-07-12 12:28:57.129431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.168 [2024-07-12 12:28:57.211758] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:32.101 12:28:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:32.102 12:28:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:32.102 12:28:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:32.102 12:28:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:32.102 12:28:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:32.102 [2024-07-12 12:28:57.884236] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:32.102 request: 00:04:32.102 { 00:04:32.102 "trtype": "tcp", 00:04:32.102 "method": "nvmf_get_transports", 00:04:32.102 "req_id": 1 00:04:32.102 } 00:04:32.102 Got JSON-RPC error response 00:04:32.102 response: 00:04:32.102 { 00:04:32.102 "code": -19, 00:04:32.102 "message": "No such device" 00:04:32.102 } 00:04:32.102 12:28:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:32.102 12:28:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:32.102 12:28:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:32.102 12:28:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:32.102 [2024-07-12 12:28:57.896363] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:32.102 12:28:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:32.102 12:28:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:32.102 12:28:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:32.102 12:28:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:32.102 12:28:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:32.102 12:28:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:32.102 { 00:04:32.102 "subsystems": [ 00:04:32.102 { 00:04:32.102 "subsystem": "keyring", 00:04:32.102 "config": [] 00:04:32.102 }, 00:04:32.102 { 00:04:32.102 "subsystem": "iobuf", 00:04:32.102 "config": [ 00:04:32.102 { 00:04:32.102 "method": "iobuf_set_options", 00:04:32.102 "params": { 00:04:32.102 "small_pool_count": 8192, 00:04:32.102 "large_pool_count": 1024, 00:04:32.102 "small_bufsize": 8192, 00:04:32.102 "large_bufsize": 135168 00:04:32.102 } 00:04:32.102 } 00:04:32.102 ] 00:04:32.102 }, 00:04:32.102 { 00:04:32.102 "subsystem": "sock", 00:04:32.102 "config": [ 00:04:32.102 { 00:04:32.102 "method": "sock_set_default_impl", 00:04:32.102 "params": { 00:04:32.102 "impl_name": "uring" 00:04:32.102 } 00:04:32.102 }, 00:04:32.102 { 00:04:32.102 "method": "sock_impl_set_options", 
00:04:32.102 "params": { 00:04:32.102 "impl_name": "ssl", 00:04:32.102 "recv_buf_size": 4096, 00:04:32.102 "send_buf_size": 4096, 00:04:32.102 "enable_recv_pipe": true, 00:04:32.102 "enable_quickack": false, 00:04:32.102 "enable_placement_id": 0, 00:04:32.102 "enable_zerocopy_send_server": true, 00:04:32.102 "enable_zerocopy_send_client": false, 00:04:32.102 "zerocopy_threshold": 0, 00:04:32.102 "tls_version": 0, 00:04:32.102 "enable_ktls": false 00:04:32.102 } 00:04:32.102 }, 00:04:32.102 { 00:04:32.102 "method": "sock_impl_set_options", 00:04:32.102 "params": { 00:04:32.102 "impl_name": "posix", 00:04:32.102 "recv_buf_size": 2097152, 00:04:32.102 "send_buf_size": 2097152, 00:04:32.102 "enable_recv_pipe": true, 00:04:32.102 "enable_quickack": false, 00:04:32.102 "enable_placement_id": 0, 00:04:32.102 "enable_zerocopy_send_server": true, 00:04:32.102 "enable_zerocopy_send_client": false, 00:04:32.102 "zerocopy_threshold": 0, 00:04:32.102 "tls_version": 0, 00:04:32.102 "enable_ktls": false 00:04:32.102 } 00:04:32.102 }, 00:04:32.102 { 00:04:32.102 "method": "sock_impl_set_options", 00:04:32.102 "params": { 00:04:32.102 "impl_name": "uring", 00:04:32.102 "recv_buf_size": 2097152, 00:04:32.102 "send_buf_size": 2097152, 00:04:32.102 "enable_recv_pipe": true, 00:04:32.102 "enable_quickack": false, 00:04:32.102 "enable_placement_id": 0, 00:04:32.102 "enable_zerocopy_send_server": false, 00:04:32.102 "enable_zerocopy_send_client": false, 00:04:32.102 "zerocopy_threshold": 0, 00:04:32.102 "tls_version": 0, 00:04:32.102 "enable_ktls": false 00:04:32.102 } 00:04:32.102 } 00:04:32.102 ] 00:04:32.102 }, 00:04:32.102 { 00:04:32.102 "subsystem": "vmd", 00:04:32.102 "config": [] 00:04:32.102 }, 00:04:32.102 { 00:04:32.102 "subsystem": "accel", 00:04:32.102 "config": [ 00:04:32.102 { 00:04:32.102 "method": "accel_set_options", 00:04:32.102 "params": { 00:04:32.102 "small_cache_size": 128, 00:04:32.102 "large_cache_size": 16, 00:04:32.102 "task_count": 2048, 00:04:32.102 "sequence_count": 2048, 00:04:32.102 "buf_count": 2048 00:04:32.102 } 00:04:32.102 } 00:04:32.102 ] 00:04:32.102 }, 00:04:32.102 { 00:04:32.102 "subsystem": "bdev", 00:04:32.102 "config": [ 00:04:32.102 { 00:04:32.102 "method": "bdev_set_options", 00:04:32.102 "params": { 00:04:32.102 "bdev_io_pool_size": 65535, 00:04:32.102 "bdev_io_cache_size": 256, 00:04:32.102 "bdev_auto_examine": true, 00:04:32.102 "iobuf_small_cache_size": 128, 00:04:32.102 "iobuf_large_cache_size": 16 00:04:32.102 } 00:04:32.102 }, 00:04:32.102 { 00:04:32.102 "method": "bdev_raid_set_options", 00:04:32.102 "params": { 00:04:32.102 "process_window_size_kb": 1024 00:04:32.102 } 00:04:32.102 }, 00:04:32.102 { 00:04:32.102 "method": "bdev_iscsi_set_options", 00:04:32.102 "params": { 00:04:32.102 "timeout_sec": 30 00:04:32.102 } 00:04:32.102 }, 00:04:32.102 { 00:04:32.102 "method": "bdev_nvme_set_options", 00:04:32.102 "params": { 00:04:32.102 "action_on_timeout": "none", 00:04:32.102 "timeout_us": 0, 00:04:32.102 "timeout_admin_us": 0, 00:04:32.102 "keep_alive_timeout_ms": 10000, 00:04:32.102 "arbitration_burst": 0, 00:04:32.102 "low_priority_weight": 0, 00:04:32.102 "medium_priority_weight": 0, 00:04:32.102 "high_priority_weight": 0, 00:04:32.102 "nvme_adminq_poll_period_us": 10000, 00:04:32.102 "nvme_ioq_poll_period_us": 0, 00:04:32.102 "io_queue_requests": 0, 00:04:32.102 "delay_cmd_submit": true, 00:04:32.102 "transport_retry_count": 4, 00:04:32.102 "bdev_retry_count": 3, 00:04:32.102 "transport_ack_timeout": 0, 00:04:32.102 "ctrlr_loss_timeout_sec": 0, 00:04:32.102 
"reconnect_delay_sec": 0, 00:04:32.102 "fast_io_fail_timeout_sec": 0, 00:04:32.102 "disable_auto_failback": false, 00:04:32.102 "generate_uuids": false, 00:04:32.102 "transport_tos": 0, 00:04:32.102 "nvme_error_stat": false, 00:04:32.102 "rdma_srq_size": 0, 00:04:32.102 "io_path_stat": false, 00:04:32.102 "allow_accel_sequence": false, 00:04:32.102 "rdma_max_cq_size": 0, 00:04:32.102 "rdma_cm_event_timeout_ms": 0, 00:04:32.102 "dhchap_digests": [ 00:04:32.102 "sha256", 00:04:32.102 "sha384", 00:04:32.102 "sha512" 00:04:32.102 ], 00:04:32.102 "dhchap_dhgroups": [ 00:04:32.102 "null", 00:04:32.102 "ffdhe2048", 00:04:32.102 "ffdhe3072", 00:04:32.102 "ffdhe4096", 00:04:32.102 "ffdhe6144", 00:04:32.102 "ffdhe8192" 00:04:32.102 ] 00:04:32.102 } 00:04:32.102 }, 00:04:32.102 { 00:04:32.102 "method": "bdev_nvme_set_hotplug", 00:04:32.102 "params": { 00:04:32.102 "period_us": 100000, 00:04:32.102 "enable": false 00:04:32.102 } 00:04:32.102 }, 00:04:32.102 { 00:04:32.102 "method": "bdev_wait_for_examine" 00:04:32.102 } 00:04:32.102 ] 00:04:32.102 }, 00:04:32.102 { 00:04:32.102 "subsystem": "scsi", 00:04:32.102 "config": null 00:04:32.102 }, 00:04:32.102 { 00:04:32.102 "subsystem": "scheduler", 00:04:32.102 "config": [ 00:04:32.102 { 00:04:32.102 "method": "framework_set_scheduler", 00:04:32.102 "params": { 00:04:32.102 "name": "static" 00:04:32.102 } 00:04:32.102 } 00:04:32.102 ] 00:04:32.102 }, 00:04:32.102 { 00:04:32.102 "subsystem": "vhost_scsi", 00:04:32.102 "config": [] 00:04:32.102 }, 00:04:32.102 { 00:04:32.102 "subsystem": "vhost_blk", 00:04:32.102 "config": [] 00:04:32.102 }, 00:04:32.102 { 00:04:32.102 "subsystem": "ublk", 00:04:32.102 "config": [] 00:04:32.102 }, 00:04:32.102 { 00:04:32.102 "subsystem": "nbd", 00:04:32.102 "config": [] 00:04:32.102 }, 00:04:32.102 { 00:04:32.102 "subsystem": "nvmf", 00:04:32.102 "config": [ 00:04:32.102 { 00:04:32.102 "method": "nvmf_set_config", 00:04:32.102 "params": { 00:04:32.102 "discovery_filter": "match_any", 00:04:32.102 "admin_cmd_passthru": { 00:04:32.102 "identify_ctrlr": false 00:04:32.102 } 00:04:32.102 } 00:04:32.102 }, 00:04:32.102 { 00:04:32.102 "method": "nvmf_set_max_subsystems", 00:04:32.102 "params": { 00:04:32.102 "max_subsystems": 1024 00:04:32.102 } 00:04:32.102 }, 00:04:32.102 { 00:04:32.102 "method": "nvmf_set_crdt", 00:04:32.102 "params": { 00:04:32.102 "crdt1": 0, 00:04:32.102 "crdt2": 0, 00:04:32.102 "crdt3": 0 00:04:32.102 } 00:04:32.102 }, 00:04:32.102 { 00:04:32.102 "method": "nvmf_create_transport", 00:04:32.102 "params": { 00:04:32.102 "trtype": "TCP", 00:04:32.102 "max_queue_depth": 128, 00:04:32.102 "max_io_qpairs_per_ctrlr": 127, 00:04:32.102 "in_capsule_data_size": 4096, 00:04:32.102 "max_io_size": 131072, 00:04:32.102 "io_unit_size": 131072, 00:04:32.102 "max_aq_depth": 128, 00:04:32.102 "num_shared_buffers": 511, 00:04:32.102 "buf_cache_size": 4294967295, 00:04:32.102 "dif_insert_or_strip": false, 00:04:32.102 "zcopy": false, 00:04:32.102 "c2h_success": true, 00:04:32.102 "sock_priority": 0, 00:04:32.102 "abort_timeout_sec": 1, 00:04:32.102 "ack_timeout": 0, 00:04:32.103 "data_wr_pool_size": 0 00:04:32.103 } 00:04:32.103 } 00:04:32.103 ] 00:04:32.103 }, 00:04:32.103 { 00:04:32.103 "subsystem": "iscsi", 00:04:32.103 "config": [ 00:04:32.103 { 00:04:32.103 "method": "iscsi_set_options", 00:04:32.103 "params": { 00:04:32.103 "node_base": "iqn.2016-06.io.spdk", 00:04:32.103 "max_sessions": 128, 00:04:32.103 "max_connections_per_session": 2, 00:04:32.103 "max_queue_depth": 64, 00:04:32.103 "default_time2wait": 2, 
00:04:32.103 "default_time2retain": 20, 00:04:32.103 "first_burst_length": 8192, 00:04:32.103 "immediate_data": true, 00:04:32.103 "allow_duplicated_isid": false, 00:04:32.103 "error_recovery_level": 0, 00:04:32.103 "nop_timeout": 60, 00:04:32.103 "nop_in_interval": 30, 00:04:32.103 "disable_chap": false, 00:04:32.103 "require_chap": false, 00:04:32.103 "mutual_chap": false, 00:04:32.103 "chap_group": 0, 00:04:32.103 "max_large_datain_per_connection": 64, 00:04:32.103 "max_r2t_per_connection": 4, 00:04:32.103 "pdu_pool_size": 36864, 00:04:32.103 "immediate_data_pool_size": 16384, 00:04:32.103 "data_out_pool_size": 2048 00:04:32.103 } 00:04:32.103 } 00:04:32.103 ] 00:04:32.103 } 00:04:32.103 ] 00:04:32.103 } 00:04:32.103 12:28:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:32.103 12:28:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59156 00:04:32.103 12:28:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 59156 ']' 00:04:32.103 12:28:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 59156 00:04:32.103 12:28:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:32.103 12:28:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:32.103 12:28:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59156 00:04:32.103 killing process with pid 59156 00:04:32.103 12:28:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:32.103 12:28:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:32.103 12:28:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59156' 00:04:32.103 12:28:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 59156 00:04:32.103 12:28:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 59156 00:04:32.668 12:28:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59189 00:04:32.668 12:28:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:32.668 12:28:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:37.954 12:29:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59189 00:04:37.954 12:29:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 59189 ']' 00:04:37.954 12:29:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 59189 00:04:37.954 12:29:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:37.954 12:29:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:37.954 12:29:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59189 00:04:37.954 12:29:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:37.954 12:29:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:37.954 killing process with pid 59189 00:04:37.954 12:29:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59189' 00:04:37.954 12:29:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 59189 
00:04:37.954 12:29:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 59189 00:04:38.211 12:29:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:38.211 12:29:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:38.211 00:04:38.211 real 0m7.470s 00:04:38.211 user 0m6.988s 00:04:38.211 sys 0m0.890s 00:04:38.211 ************************************ 00:04:38.211 END TEST skip_rpc_with_json 00:04:38.211 ************************************ 00:04:38.211 12:29:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.211 12:29:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:38.469 12:29:04 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:38.469 12:29:04 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:38.469 12:29:04 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.469 12:29:04 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.469 12:29:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.469 ************************************ 00:04:38.469 START TEST skip_rpc_with_delay 00:04:38.469 ************************************ 00:04:38.469 12:29:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:38.469 12:29:04 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:38.469 12:29:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:38.469 12:29:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:38.469 12:29:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:38.469 12:29:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:38.469 12:29:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:38.469 12:29:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:38.469 12:29:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:38.469 12:29:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:38.469 12:29:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:38.469 12:29:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:38.469 12:29:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:38.469 [2024-07-12 12:29:04.401656] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
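[editor's note] The delay test above turns on spdk_tgt refusing --wait-for-rpc once the RPC server is disabled; the error printed here is the expected outcome. A minimal manual reproduction of that check, assuming the same build-tree paths as this run, would look roughly like:

  # Sketch only: reproduces the skip_rpc_with_delay assertion by hand.
  SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  # Combining --no-rpc-server with --wait-for-rpc should fail immediately with
  # "Cannot use '--wait-for-rpc' if no RPC server is going to be started."
  if ! "$SPDK_TGT" --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "spdk_tgt rejected the flag combination as expected"
  fi

The test wraps the same invocation in the NOT helper, so the non-zero exit status is what counts as a pass.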
00:04:38.469 [2024-07-12 12:29:04.401799] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:38.469 12:29:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:38.469 12:29:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:38.469 12:29:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:38.469 12:29:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:38.469 00:04:38.469 real 0m0.094s 00:04:38.469 user 0m0.064s 00:04:38.469 sys 0m0.029s 00:04:38.469 ************************************ 00:04:38.469 END TEST skip_rpc_with_delay 00:04:38.469 ************************************ 00:04:38.469 12:29:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.469 12:29:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:38.469 12:29:04 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:38.469 12:29:04 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:38.469 12:29:04 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:38.469 12:29:04 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:38.469 12:29:04 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.469 12:29:04 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.469 12:29:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.469 ************************************ 00:04:38.469 START TEST exit_on_failed_rpc_init 00:04:38.469 ************************************ 00:04:38.469 12:29:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:38.469 12:29:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59293 00:04:38.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.469 12:29:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59293 00:04:38.469 12:29:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:38.469 12:29:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 59293 ']' 00:04:38.469 12:29:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.469 12:29:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:38.469 12:29:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.469 12:29:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:38.469 12:29:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:38.727 [2024-07-12 12:29:04.549662] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:04:38.727 [2024-07-12 12:29:04.549768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59293 ] 00:04:38.727 [2024-07-12 12:29:04.686406] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.984 [2024-07-12 12:29:04.809490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.984 [2024-07-12 12:29:04.890315] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:39.556 12:29:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:39.556 12:29:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:39.556 12:29:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:39.556 12:29:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:39.556 12:29:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:39.557 12:29:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:39.557 12:29:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.557 12:29:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:39.557 12:29:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.557 12:29:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:39.557 12:29:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.557 12:29:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:39.557 12:29:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:39.557 12:29:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:39.557 12:29:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:39.815 [2024-07-12 12:29:05.639436] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:04:39.815 [2024-07-12 12:29:05.639559] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59311 ] 00:04:39.815 [2024-07-12 12:29:05.780029] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.074 [2024-07-12 12:29:05.943493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:40.074 [2024-07-12 12:29:05.943592] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
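[editor's note] What fails here is the second spdk_tgt instance: the first target already owns /var/tmp/spdk.sock, so the one launched under NOT cannot bind its RPC listener. A rough standalone sketch of the same collision (binary path and socket taken from this run, the rest hypothetical):

  SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  "$SPDK_TGT" -m 0x1 &                 # first instance claims /var/tmp/spdk.sock
  first=$!
  sleep 1                              # the suite uses waitforlisten rather than a fixed sleep
  # Second instance on another core mask but the same default RPC socket; it is
  # expected to log "RPC Unix domain socket path /var/tmp/spdk.sock in use" and
  # exit non-zero, which is exactly what exit_on_failed_rpc_init asserts.
  "$SPDK_TGT" -m 0x2 || echo "second instance failed as expected"
  kill "$first"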
00:04:40.074 [2024-07-12 12:29:05.943611] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:40.074 [2024-07-12 12:29:05.943623] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:40.074 12:29:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:40.074 12:29:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:40.074 12:29:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:40.074 12:29:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:40.074 12:29:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:40.074 12:29:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:40.074 12:29:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:40.074 12:29:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59293 00:04:40.074 12:29:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 59293 ']' 00:04:40.074 12:29:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 59293 00:04:40.074 12:29:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:40.074 12:29:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:40.074 12:29:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59293 00:04:40.074 killing process with pid 59293 00:04:40.074 12:29:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:40.074 12:29:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:40.074 12:29:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59293' 00:04:40.074 12:29:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 59293 00:04:40.074 12:29:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 59293 00:04:40.641 ************************************ 00:04:40.641 END TEST exit_on_failed_rpc_init 00:04:40.641 ************************************ 00:04:40.641 00:04:40.641 real 0m2.155s 00:04:40.641 user 0m2.497s 00:04:40.641 sys 0m0.545s 00:04:40.641 12:29:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.641 12:29:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:40.641 12:29:06 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:40.641 12:29:06 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:40.641 00:04:40.641 real 0m15.697s 00:04:40.641 user 0m14.830s 00:04:40.641 sys 0m2.040s 00:04:40.641 ************************************ 00:04:40.641 END TEST skip_rpc 00:04:40.641 ************************************ 00:04:40.641 12:29:06 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.641 12:29:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.900 12:29:06 -- common/autotest_common.sh@1142 -- # return 0 00:04:40.900 12:29:06 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:40.900 12:29:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.900 
12:29:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.900 12:29:06 -- common/autotest_common.sh@10 -- # set +x 00:04:40.900 ************************************ 00:04:40.900 START TEST rpc_client 00:04:40.900 ************************************ 00:04:40.900 12:29:06 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:40.900 * Looking for test storage... 00:04:40.900 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:40.900 12:29:06 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:40.900 OK 00:04:40.900 12:29:06 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:40.900 00:04:40.900 real 0m0.107s 00:04:40.900 user 0m0.057s 00:04:40.900 sys 0m0.056s 00:04:40.900 ************************************ 00:04:40.900 END TEST rpc_client 00:04:40.900 ************************************ 00:04:40.900 12:29:06 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.900 12:29:06 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:40.900 12:29:06 -- common/autotest_common.sh@1142 -- # return 0 00:04:40.900 12:29:06 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:40.900 12:29:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.900 12:29:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.900 12:29:06 -- common/autotest_common.sh@10 -- # set +x 00:04:40.900 ************************************ 00:04:40.900 START TEST json_config 00:04:40.900 ************************************ 00:04:40.900 12:29:06 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:40.900 12:29:06 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:40.900 12:29:06 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:40.900 12:29:06 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:40.900 12:29:06 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:40.900 12:29:06 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:40.900 12:29:06 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:40.900 12:29:06 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:40.900 12:29:06 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:40.900 12:29:06 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:40.900 12:29:06 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:40.900 12:29:06 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:40.900 12:29:06 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:40.900 12:29:06 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:04:40.900 12:29:06 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=16360ad5-8c23-4d49-afe0-9a35c426fec5 00:04:40.900 12:29:06 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:40.900 12:29:06 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:40.900 12:29:06 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:40.900 12:29:06 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:40.900 12:29:06 json_config -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:40.900 12:29:06 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:40.900 12:29:06 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:40.900 12:29:06 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:40.900 12:29:06 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.900 12:29:06 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.900 12:29:06 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.900 12:29:06 json_config -- paths/export.sh@5 -- # export PATH 00:04:40.900 12:29:06 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.900 12:29:06 json_config -- nvmf/common.sh@47 -- # : 0 00:04:40.900 12:29:06 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:40.900 12:29:06 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:40.900 12:29:06 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:40.900 12:29:06 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:40.900 12:29:06 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:40.900 12:29:06 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:40.900 12:29:06 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:40.900 12:29:06 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:41.158 12:29:06 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:41.158 12:29:06 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:41.158 12:29:06 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:41.158 12:29:06 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:41.158 12:29:06 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:41.158 12:29:06 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:41.158 12:29:06 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:41.158 12:29:06 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:41.158 12:29:06 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:41.158 12:29:06 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:41.159 12:29:06 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:41.159 12:29:06 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:41.159 INFO: JSON configuration test init 00:04:41.159 12:29:06 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:41.159 12:29:06 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:41.159 12:29:06 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:41.159 12:29:06 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:41.159 12:29:06 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:41.159 12:29:06 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:41.159 12:29:06 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:41.159 12:29:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.159 12:29:06 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:41.159 12:29:06 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:41.159 12:29:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.159 12:29:06 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:41.159 12:29:06 json_config -- json_config/common.sh@9 -- # local app=target 00:04:41.159 12:29:06 json_config -- json_config/common.sh@10 -- # shift 00:04:41.159 12:29:06 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:41.159 12:29:06 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:41.159 12:29:06 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:41.159 12:29:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:41.159 12:29:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:41.159 12:29:06 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59440 00:04:41.159 12:29:06 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:41.159 Waiting for target to run... 
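[editor's note] The json_config suite drives a private target over /var/tmp/spdk_tgt.sock, and tgt_rpc is simply rpc.py pointed at that socket. A sketch of the same wait-then-configure pattern outside the harness (script and socket paths as in this run; the polling loop is only an approximation of waitforlisten):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk_tgt.sock
  # Poll until the RPC server answers; waitforlisten does the same with bounded retries.
  until "$RPC" -s "$SOCK" rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
  # Because the target is started with --wait-for-rpc, subsystems stay uninitialized
  # until a configuration is loaded (load_config below) or framework_start_init is sent.
  "$RPC" -s "$SOCK" framework_start_init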
00:04:41.159 12:29:06 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:41.159 12:29:06 json_config -- json_config/common.sh@25 -- # waitforlisten 59440 /var/tmp/spdk_tgt.sock 00:04:41.159 12:29:06 json_config -- common/autotest_common.sh@829 -- # '[' -z 59440 ']' 00:04:41.159 12:29:06 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:41.159 12:29:06 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:41.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:41.159 12:29:06 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:41.159 12:29:06 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:41.159 12:29:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.159 [2024-07-12 12:29:07.068438] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:04:41.159 [2024-07-12 12:29:07.068547] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59440 ] 00:04:41.732 [2024-07-12 12:29:07.593294] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.732 [2024-07-12 12:29:07.709087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.299 00:04:42.299 12:29:08 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:42.299 12:29:08 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:42.299 12:29:08 json_config -- json_config/common.sh@26 -- # echo '' 00:04:42.299 12:29:08 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:42.299 12:29:08 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:42.299 12:29:08 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:42.299 12:29:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.299 12:29:08 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:42.299 12:29:08 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:42.299 12:29:08 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:42.299 12:29:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.299 12:29:08 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:42.299 12:29:08 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:42.299 12:29:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:42.557 [2024-07-12 12:29:08.421609] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:42.815 12:29:08 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:42.815 12:29:08 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:42.815 12:29:08 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:42.815 12:29:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.815 12:29:08 
json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:42.815 12:29:08 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:42.815 12:29:08 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:42.815 12:29:08 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:42.815 12:29:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:42.815 12:29:08 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:43.074 12:29:08 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:43.074 12:29:08 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:43.074 12:29:08 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:43.074 12:29:08 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:43.074 12:29:08 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:43.074 12:29:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.074 12:29:08 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:43.074 12:29:08 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:43.074 12:29:08 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:43.074 12:29:08 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:43.074 12:29:08 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:43.074 12:29:08 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:43.074 12:29:08 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:43.074 12:29:08 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:43.074 12:29:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.074 12:29:08 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:43.074 12:29:08 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:43.074 12:29:08 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:43.074 12:29:08 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:43.074 12:29:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:43.333 MallocForNvmf0 00:04:43.333 12:29:09 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:43.333 12:29:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:43.671 MallocForNvmf1 00:04:43.671 12:29:09 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:43.671 12:29:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:43.671 [2024-07-12 12:29:09.743852] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:43.930 12:29:09 json_config -- json_config/json_config.sh@246 -- # tgt_rpc 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:43.931 12:29:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:43.931 12:29:09 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:43.931 12:29:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:44.189 12:29:10 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:44.189 12:29:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:44.757 12:29:10 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:44.757 12:29:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:44.757 [2024-07-12 12:29:10.732520] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:44.757 12:29:10 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:44.757 12:29:10 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:44.757 12:29:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.757 12:29:10 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:44.757 12:29:10 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:44.757 12:29:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.757 12:29:10 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:44.757 12:29:10 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:44.757 12:29:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:45.015 MallocBdevForConfigChangeCheck 00:04:45.274 12:29:11 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:45.274 12:29:11 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:45.274 12:29:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.274 12:29:11 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:45.274 12:29:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:45.533 INFO: shutting down applications... 00:04:45.533 12:29:11 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
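[editor's note] Stripped of the tgt_rpc wrapper, the NVMe-oF configuration assembled above reduces to a short rpc.py sequence. A condensed recap, with all names, sizes and the socket path taken from the calls logged in this run:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
  $RPC nvmf_create_transport -t tcp -u 8192 -c 0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
  $RPC save_config > spdk_tgt_config.json

The listener address 127.0.0.1:4420 matches the NVMF_TCP_IP_ADDRESS and NVMF_PORT defaults sourced from nvmf/common.sh earlier in this log.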
00:04:45.533 12:29:11 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:45.533 12:29:11 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:45.533 12:29:11 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:45.533 12:29:11 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:45.794 Calling clear_iscsi_subsystem 00:04:45.794 Calling clear_nvmf_subsystem 00:04:45.794 Calling clear_nbd_subsystem 00:04:45.794 Calling clear_ublk_subsystem 00:04:45.794 Calling clear_vhost_blk_subsystem 00:04:45.794 Calling clear_vhost_scsi_subsystem 00:04:45.794 Calling clear_bdev_subsystem 00:04:45.794 12:29:11 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:45.794 12:29:11 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:45.794 12:29:11 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:45.794 12:29:11 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:45.794 12:29:11 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:45.794 12:29:11 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:46.361 12:29:12 json_config -- json_config/json_config.sh@345 -- # break 00:04:46.361 12:29:12 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:46.361 12:29:12 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:46.361 12:29:12 json_config -- json_config/common.sh@31 -- # local app=target 00:04:46.361 12:29:12 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:46.361 12:29:12 json_config -- json_config/common.sh@35 -- # [[ -n 59440 ]] 00:04:46.361 12:29:12 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59440 00:04:46.361 12:29:12 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:46.361 12:29:12 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:46.361 12:29:12 json_config -- json_config/common.sh@41 -- # kill -0 59440 00:04:46.361 12:29:12 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:46.928 12:29:12 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:46.928 SPDK target shutdown done 00:04:46.928 12:29:12 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:46.928 12:29:12 json_config -- json_config/common.sh@41 -- # kill -0 59440 00:04:46.928 12:29:12 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:46.928 12:29:12 json_config -- json_config/common.sh@43 -- # break 00:04:46.928 12:29:12 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:46.928 12:29:12 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:46.928 12:29:12 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:46.928 INFO: relaunching applications... 
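[editor's note] The relaunch step feeds the dump produced by save_config straight back into a fresh target, so the round trip is: build via RPC, save, restart from the file. In isolation the restart side is just the command line logged for pid 59625 below:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock \
      --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json

With --json the target applies the saved subsystem configuration during startup instead of waiting for load_config over RPC.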
00:04:46.929 12:29:12 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:46.929 12:29:12 json_config -- json_config/common.sh@9 -- # local app=target 00:04:46.929 12:29:12 json_config -- json_config/common.sh@10 -- # shift 00:04:46.929 12:29:12 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:46.929 12:29:12 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:46.929 12:29:12 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:46.929 12:29:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:46.929 Waiting for target to run... 00:04:46.929 12:29:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:46.929 12:29:12 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59625 00:04:46.929 12:29:12 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:46.929 12:29:12 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:46.929 12:29:12 json_config -- json_config/common.sh@25 -- # waitforlisten 59625 /var/tmp/spdk_tgt.sock 00:04:46.929 12:29:12 json_config -- common/autotest_common.sh@829 -- # '[' -z 59625 ']' 00:04:46.929 12:29:12 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:46.929 12:29:12 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:46.929 12:29:12 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:46.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:46.929 12:29:12 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:46.929 12:29:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.929 [2024-07-12 12:29:12.825823] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:04:46.929 [2024-07-12 12:29:12.826191] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59625 ] 00:04:47.496 [2024-07-12 12:29:13.341194] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.496 [2024-07-12 12:29:13.458419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.807 [2024-07-12 12:29:13.585061] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:47.807 [2024-07-12 12:29:13.805783] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:47.807 [2024-07-12 12:29:13.837870] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:47.807 00:04:47.807 INFO: Checking if target configuration is the same... 
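[editor's note] The equality check that follows is a sorted diff between the live configuration and the file the target was launched from. Roughly, and assuming config_filter.py reads JSON on stdin the way json_diff.sh uses it here:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  FILTER=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
  CFG=/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
  # Normalize both dumps before comparing, as json_diff.sh does with its mktemp files.
  $RPC save_config | "$FILTER" -method sort > /tmp/live.json
  "$FILTER" -method sort < "$CFG" > /tmp/file.json
  diff -u /tmp/live.json /tmp/file.json && echo 'INFO: JSON config files are the same'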
00:04:47.807 12:29:13 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:47.807 12:29:13 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:47.807 12:29:13 json_config -- json_config/common.sh@26 -- # echo '' 00:04:47.807 12:29:13 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:47.807 12:29:13 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:47.807 12:29:13 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:47.807 12:29:13 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:47.807 12:29:13 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:47.807 + '[' 2 -ne 2 ']' 00:04:47.807 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:48.065 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:48.065 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:48.065 +++ basename /dev/fd/62 00:04:48.065 ++ mktemp /tmp/62.XXX 00:04:48.065 + tmp_file_1=/tmp/62.HZx 00:04:48.065 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:48.065 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:48.065 + tmp_file_2=/tmp/spdk_tgt_config.json.bXU 00:04:48.065 + ret=0 00:04:48.065 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:48.323 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:48.323 + diff -u /tmp/62.HZx /tmp/spdk_tgt_config.json.bXU 00:04:48.323 INFO: JSON config files are the same 00:04:48.323 + echo 'INFO: JSON config files are the same' 00:04:48.323 + rm /tmp/62.HZx /tmp/spdk_tgt_config.json.bXU 00:04:48.323 + exit 0 00:04:48.323 INFO: changing configuration and checking if this can be detected... 00:04:48.323 12:29:14 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:48.323 12:29:14 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:48.323 12:29:14 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:48.323 12:29:14 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:48.581 12:29:14 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:48.581 12:29:14 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:48.581 12:29:14 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:48.581 + '[' 2 -ne 2 ']' 00:04:48.581 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:48.581 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:48.581 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:48.581 +++ basename /dev/fd/62 00:04:48.581 ++ mktemp /tmp/62.XXX 00:04:48.581 + tmp_file_1=/tmp/62.iVn 00:04:48.581 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:48.581 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:48.581 + tmp_file_2=/tmp/spdk_tgt_config.json.T0f 00:04:48.581 + ret=0 00:04:48.581 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:49.148 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:49.148 + diff -u /tmp/62.iVn /tmp/spdk_tgt_config.json.T0f 00:04:49.148 + ret=1 00:04:49.148 + echo '=== Start of file: /tmp/62.iVn ===' 00:04:49.148 + cat /tmp/62.iVn 00:04:49.148 + echo '=== End of file: /tmp/62.iVn ===' 00:04:49.148 + echo '' 00:04:49.148 + echo '=== Start of file: /tmp/spdk_tgt_config.json.T0f ===' 00:04:49.148 + cat /tmp/spdk_tgt_config.json.T0f 00:04:49.148 + echo '=== End of file: /tmp/spdk_tgt_config.json.T0f ===' 00:04:49.148 + echo '' 00:04:49.148 + rm /tmp/62.iVn /tmp/spdk_tgt_config.json.T0f 00:04:49.148 + exit 1 00:04:49.148 INFO: configuration change detected. 00:04:49.148 12:29:15 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:49.148 12:29:15 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:49.148 12:29:15 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:49.148 12:29:15 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:49.148 12:29:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.148 12:29:15 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:49.148 12:29:15 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:49.148 12:29:15 json_config -- json_config/json_config.sh@317 -- # [[ -n 59625 ]] 00:04:49.148 12:29:15 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:49.148 12:29:15 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:49.148 12:29:15 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:49.148 12:29:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.148 12:29:15 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:49.148 12:29:15 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:49.148 12:29:15 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:49.148 12:29:15 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:49.148 12:29:15 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:49.148 12:29:15 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:49.148 12:29:15 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:49.149 12:29:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.149 12:29:15 json_config -- json_config/json_config.sh@323 -- # killprocess 59625 00:04:49.149 12:29:15 json_config -- common/autotest_common.sh@948 -- # '[' -z 59625 ']' 00:04:49.149 12:29:15 json_config -- common/autotest_common.sh@952 -- # kill -0 59625 00:04:49.149 12:29:15 json_config -- common/autotest_common.sh@953 -- # uname 00:04:49.149 12:29:15 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:49.149 12:29:15 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59625 00:04:49.149 
killing process with pid 59625 00:04:49.149 12:29:15 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:49.149 12:29:15 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:49.149 12:29:15 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59625' 00:04:49.149 12:29:15 json_config -- common/autotest_common.sh@967 -- # kill 59625 00:04:49.149 12:29:15 json_config -- common/autotest_common.sh@972 -- # wait 59625 00:04:49.715 12:29:15 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:49.715 12:29:15 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:49.715 12:29:15 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:49.715 12:29:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.715 12:29:15 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:49.715 INFO: Success 00:04:49.715 12:29:15 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:49.715 ************************************ 00:04:49.715 END TEST json_config 00:04:49.715 ************************************ 00:04:49.715 00:04:49.715 real 0m8.704s 00:04:49.715 user 0m12.352s 00:04:49.715 sys 0m1.978s 00:04:49.715 12:29:15 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.715 12:29:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.715 12:29:15 -- common/autotest_common.sh@1142 -- # return 0 00:04:49.715 12:29:15 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:49.715 12:29:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.715 12:29:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.715 12:29:15 -- common/autotest_common.sh@10 -- # set +x 00:04:49.715 ************************************ 00:04:49.715 START TEST json_config_extra_key 00:04:49.715 ************************************ 00:04:49.715 12:29:15 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:49.715 12:29:15 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:49.715 12:29:15 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:49.715 12:29:15 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:49.715 12:29:15 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:49.715 12:29:15 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:49.715 12:29:15 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:49.715 12:29:15 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:49.715 12:29:15 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:49.715 12:29:15 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:49.715 12:29:15 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:49.715 12:29:15 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:49.715 12:29:15 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:49.715 12:29:15 json_config_extra_key -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:04:49.715 12:29:15 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=16360ad5-8c23-4d49-afe0-9a35c426fec5 00:04:49.715 12:29:15 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:49.715 12:29:15 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:49.715 12:29:15 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:49.715 12:29:15 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:49.715 12:29:15 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:49.715 12:29:15 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:49.715 12:29:15 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:49.715 12:29:15 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:49.715 12:29:15 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.715 12:29:15 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.715 12:29:15 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.715 12:29:15 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:49.716 12:29:15 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.716 12:29:15 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:49.716 12:29:15 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:49.716 12:29:15 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:49.716 12:29:15 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:49.716 12:29:15 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:49.716 12:29:15 json_config_extra_key -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:49.716 12:29:15 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:49.716 12:29:15 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:49.716 12:29:15 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:49.716 12:29:15 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:49.716 12:29:15 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:49.716 12:29:15 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:49.716 12:29:15 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:49.716 12:29:15 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:49.716 12:29:15 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:49.716 12:29:15 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:49.716 12:29:15 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:49.716 12:29:15 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:49.716 12:29:15 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:49.716 12:29:15 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:49.716 INFO: launching applications... 00:04:49.716 12:29:15 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:49.716 12:29:15 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:49.716 12:29:15 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:49.716 12:29:15 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:49.716 12:29:15 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:49.716 12:29:15 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:49.716 12:29:15 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:49.716 12:29:15 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:49.716 12:29:15 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59771 00:04:49.716 Waiting for target to run... 00:04:49.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:49.716 12:29:15 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:04:49.716 12:29:15 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59771 /var/tmp/spdk_tgt.sock 00:04:49.716 12:29:15 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:49.716 12:29:15 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 59771 ']' 00:04:49.716 12:29:15 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:49.716 12:29:15 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:49.716 12:29:15 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:49.716 12:29:15 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:49.716 12:29:15 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:49.974 [2024-07-12 12:29:15.809250] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:04:49.974 [2024-07-12 12:29:15.810065] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59771 ] 00:04:50.541 [2024-07-12 12:29:16.342258] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.541 [2024-07-12 12:29:16.464607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.541 [2024-07-12 12:29:16.485283] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:50.800 00:04:50.800 INFO: shutting down applications... 00:04:50.800 12:29:16 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:50.800 12:29:16 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:50.800 12:29:16 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:50.800 12:29:16 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
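For readers following the trace: the json_config_test_start_app / waitforlisten sequence above reduces to the pattern sketched below. This is a hypothetical, simplified re-creation built only from the command line visible in the log (binary path, -m 0x1 -s 1024, the /var/tmp/spdk_tgt.sock address, and extra_key.json); the retry budget and loop structure are illustrative, not the verbatim contents of test/json_config/common.sh.

    #!/usr/bin/env bash
    # Sketch: launch spdk_tgt from a JSON config and wait until its RPC socket answers.
    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk_tgt.sock

    "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$SOCK" \
        --json "$SPDK/test/json_config/extra_key.json" &
    app_pid=$!

    # Poll the RPC socket; rpc_get_methods succeeds once the target is listening.
    for _ in $(seq 1 100); do
        if "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods &> /dev/null; then
            echo "target is up (pid $app_pid)"
            break
        fi
        sleep 0.1
    done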
00:04:50.800 12:29:16 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:50.800 12:29:16 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:50.800 12:29:16 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:50.800 12:29:16 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59771 ]] 00:04:50.800 12:29:16 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59771 00:04:50.800 12:29:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:50.800 12:29:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:50.800 12:29:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59771 00:04:50.800 12:29:16 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:51.366 12:29:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:51.366 12:29:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.366 12:29:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59771 00:04:51.366 12:29:17 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:51.934 12:29:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:51.934 12:29:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.934 12:29:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59771 00:04:51.934 12:29:17 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:51.934 12:29:17 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:51.934 12:29:17 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:51.934 SPDK target shutdown done 00:04:51.934 Success 00:04:51.934 12:29:17 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:51.934 12:29:17 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:51.934 00:04:51.934 real 0m2.195s 00:04:51.934 user 0m1.735s 00:04:51.934 sys 0m0.554s 00:04:51.934 ************************************ 00:04:51.934 END TEST json_config_extra_key 00:04:51.934 ************************************ 00:04:51.934 12:29:17 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.934 12:29:17 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:51.934 12:29:17 -- common/autotest_common.sh@1142 -- # return 0 00:04:51.934 12:29:17 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:51.934 12:29:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.934 12:29:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.934 12:29:17 -- common/autotest_common.sh@10 -- # set +x 00:04:51.934 ************************************ 00:04:51.934 START TEST alias_rpc 00:04:51.934 ************************************ 00:04:51.934 12:29:17 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:51.934 * Looking for test storage... 
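Stepping back to the json_config_extra_key teardown traced just above (before the alias_rpc output continues): the test stops the target with SIGINT and then polls the pid in a bounded loop. A minimal sketch of that loop, assuming app_pid holds the target's pid; the 30 iterations of 0.5 s match the i < 30 / sleep 0.5 counters in the trace, everything else is illustrative.

    # Sketch: bounded graceful shutdown, as in json_config/common.sh.
    kill -SIGINT "$app_pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$app_pid" 2> /dev/null || break   # pid gone -> target exited
        sleep 0.5
    done
    if ! kill -0 "$app_pid" 2> /dev/null; then
        echo 'SPDK target shutdown done'
    fi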
00:04:51.934 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:51.934 12:29:17 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:51.934 12:29:17 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59842 00:04:51.934 12:29:17 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:51.934 12:29:17 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59842 00:04:51.934 12:29:17 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 59842 ']' 00:04:51.934 12:29:17 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.934 12:29:17 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:51.934 12:29:17 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.934 12:29:17 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:51.934 12:29:17 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.192 [2024-07-12 12:29:18.050861] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:04:52.192 [2024-07-12 12:29:18.051164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59842 ] 00:04:52.192 [2024-07-12 12:29:18.185674] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.450 [2024-07-12 12:29:18.329561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.450 [2024-07-12 12:29:18.405507] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:53.017 12:29:19 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:53.017 12:29:19 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:53.017 12:29:19 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:53.274 12:29:19 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59842 00:04:53.274 12:29:19 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 59842 ']' 00:04:53.274 12:29:19 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 59842 00:04:53.274 12:29:19 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:53.274 12:29:19 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:53.274 12:29:19 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59842 00:04:53.274 killing process with pid 59842 00:04:53.275 12:29:19 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:53.275 12:29:19 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:53.275 12:29:19 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59842' 00:04:53.275 12:29:19 alias_rpc -- common/autotest_common.sh@967 -- # kill 59842 00:04:53.275 12:29:19 alias_rpc -- common/autotest_common.sh@972 -- # wait 59842 00:04:54.206 ************************************ 00:04:54.206 END TEST alias_rpc 00:04:54.206 ************************************ 00:04:54.206 00:04:54.206 real 0m2.030s 00:04:54.206 user 0m2.199s 00:04:54.206 sys 0m0.531s 00:04:54.206 12:29:19 alias_rpc -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.206 12:29:19 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.206 12:29:19 -- common/autotest_common.sh@1142 -- # return 0 00:04:54.206 12:29:19 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:54.206 12:29:19 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:54.206 12:29:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:54.206 12:29:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.206 12:29:19 -- common/autotest_common.sh@10 -- # set +x 00:04:54.206 ************************************ 00:04:54.206 START TEST spdkcli_tcp 00:04:54.206 ************************************ 00:04:54.206 12:29:19 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:54.206 * Looking for test storage... 00:04:54.206 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:54.206 12:29:20 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:54.206 12:29:20 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:54.206 12:29:20 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:54.206 12:29:20 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:54.206 12:29:20 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:54.206 12:29:20 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:54.206 12:29:20 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:54.206 12:29:20 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:54.206 12:29:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:54.206 12:29:20 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59923 00:04:54.206 12:29:20 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59923 00:04:54.206 12:29:20 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:54.206 12:29:20 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 59923 ']' 00:04:54.206 12:29:20 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.206 12:29:20 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:54.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.206 12:29:20 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.206 12:29:20 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:54.206 12:29:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:54.206 [2024-07-12 12:29:20.131625] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:04:54.206 [2024-07-12 12:29:20.131742] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59923 ] 00:04:54.206 [2024-07-12 12:29:20.270869] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:54.464 [2024-07-12 12:29:20.430888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.464 [2024-07-12 12:29:20.430941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.464 [2024-07-12 12:29:20.512113] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:55.469 12:29:21 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:55.469 12:29:21 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:04:55.469 12:29:21 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59941 00:04:55.469 12:29:21 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:55.469 12:29:21 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:55.469 [ 00:04:55.469 "bdev_malloc_delete", 00:04:55.469 "bdev_malloc_create", 00:04:55.469 "bdev_null_resize", 00:04:55.469 "bdev_null_delete", 00:04:55.469 "bdev_null_create", 00:04:55.469 "bdev_nvme_cuse_unregister", 00:04:55.469 "bdev_nvme_cuse_register", 00:04:55.469 "bdev_opal_new_user", 00:04:55.469 "bdev_opal_set_lock_state", 00:04:55.469 "bdev_opal_delete", 00:04:55.469 "bdev_opal_get_info", 00:04:55.469 "bdev_opal_create", 00:04:55.469 "bdev_nvme_opal_revert", 00:04:55.469 "bdev_nvme_opal_init", 00:04:55.469 "bdev_nvme_send_cmd", 00:04:55.469 "bdev_nvme_get_path_iostat", 00:04:55.469 "bdev_nvme_get_mdns_discovery_info", 00:04:55.469 "bdev_nvme_stop_mdns_discovery", 00:04:55.469 "bdev_nvme_start_mdns_discovery", 00:04:55.469 "bdev_nvme_set_multipath_policy", 00:04:55.469 "bdev_nvme_set_preferred_path", 00:04:55.469 "bdev_nvme_get_io_paths", 00:04:55.469 "bdev_nvme_remove_error_injection", 00:04:55.469 "bdev_nvme_add_error_injection", 00:04:55.469 "bdev_nvme_get_discovery_info", 00:04:55.469 "bdev_nvme_stop_discovery", 00:04:55.469 "bdev_nvme_start_discovery", 00:04:55.469 "bdev_nvme_get_controller_health_info", 00:04:55.469 "bdev_nvme_disable_controller", 00:04:55.469 "bdev_nvme_enable_controller", 00:04:55.469 "bdev_nvme_reset_controller", 00:04:55.469 "bdev_nvme_get_transport_statistics", 00:04:55.469 "bdev_nvme_apply_firmware", 00:04:55.469 "bdev_nvme_detach_controller", 00:04:55.469 "bdev_nvme_get_controllers", 00:04:55.469 "bdev_nvme_attach_controller", 00:04:55.469 "bdev_nvme_set_hotplug", 00:04:55.469 "bdev_nvme_set_options", 00:04:55.469 "bdev_passthru_delete", 00:04:55.469 "bdev_passthru_create", 00:04:55.469 "bdev_lvol_set_parent_bdev", 00:04:55.469 "bdev_lvol_set_parent", 00:04:55.469 "bdev_lvol_check_shallow_copy", 00:04:55.469 "bdev_lvol_start_shallow_copy", 00:04:55.469 "bdev_lvol_grow_lvstore", 00:04:55.469 "bdev_lvol_get_lvols", 00:04:55.469 "bdev_lvol_get_lvstores", 00:04:55.469 "bdev_lvol_delete", 00:04:55.469 "bdev_lvol_set_read_only", 00:04:55.469 "bdev_lvol_resize", 00:04:55.469 "bdev_lvol_decouple_parent", 00:04:55.469 "bdev_lvol_inflate", 00:04:55.469 "bdev_lvol_rename", 00:04:55.469 "bdev_lvol_clone_bdev", 00:04:55.469 "bdev_lvol_clone", 00:04:55.469 "bdev_lvol_snapshot", 00:04:55.469 "bdev_lvol_create", 
00:04:55.469 "bdev_lvol_delete_lvstore", 00:04:55.469 "bdev_lvol_rename_lvstore", 00:04:55.469 "bdev_lvol_create_lvstore", 00:04:55.469 "bdev_raid_set_options", 00:04:55.469 "bdev_raid_remove_base_bdev", 00:04:55.469 "bdev_raid_add_base_bdev", 00:04:55.469 "bdev_raid_delete", 00:04:55.469 "bdev_raid_create", 00:04:55.469 "bdev_raid_get_bdevs", 00:04:55.469 "bdev_error_inject_error", 00:04:55.469 "bdev_error_delete", 00:04:55.469 "bdev_error_create", 00:04:55.469 "bdev_split_delete", 00:04:55.469 "bdev_split_create", 00:04:55.469 "bdev_delay_delete", 00:04:55.469 "bdev_delay_create", 00:04:55.469 "bdev_delay_update_latency", 00:04:55.469 "bdev_zone_block_delete", 00:04:55.469 "bdev_zone_block_create", 00:04:55.469 "blobfs_create", 00:04:55.469 "blobfs_detect", 00:04:55.469 "blobfs_set_cache_size", 00:04:55.469 "bdev_aio_delete", 00:04:55.469 "bdev_aio_rescan", 00:04:55.469 "bdev_aio_create", 00:04:55.469 "bdev_ftl_set_property", 00:04:55.469 "bdev_ftl_get_properties", 00:04:55.469 "bdev_ftl_get_stats", 00:04:55.469 "bdev_ftl_unmap", 00:04:55.469 "bdev_ftl_unload", 00:04:55.469 "bdev_ftl_delete", 00:04:55.469 "bdev_ftl_load", 00:04:55.469 "bdev_ftl_create", 00:04:55.469 "bdev_virtio_attach_controller", 00:04:55.469 "bdev_virtio_scsi_get_devices", 00:04:55.469 "bdev_virtio_detach_controller", 00:04:55.469 "bdev_virtio_blk_set_hotplug", 00:04:55.469 "bdev_iscsi_delete", 00:04:55.469 "bdev_iscsi_create", 00:04:55.469 "bdev_iscsi_set_options", 00:04:55.469 "bdev_uring_delete", 00:04:55.469 "bdev_uring_rescan", 00:04:55.469 "bdev_uring_create", 00:04:55.469 "accel_error_inject_error", 00:04:55.469 "ioat_scan_accel_module", 00:04:55.469 "dsa_scan_accel_module", 00:04:55.469 "iaa_scan_accel_module", 00:04:55.469 "keyring_file_remove_key", 00:04:55.469 "keyring_file_add_key", 00:04:55.469 "keyring_linux_set_options", 00:04:55.469 "iscsi_get_histogram", 00:04:55.469 "iscsi_enable_histogram", 00:04:55.469 "iscsi_set_options", 00:04:55.469 "iscsi_get_auth_groups", 00:04:55.469 "iscsi_auth_group_remove_secret", 00:04:55.469 "iscsi_auth_group_add_secret", 00:04:55.469 "iscsi_delete_auth_group", 00:04:55.469 "iscsi_create_auth_group", 00:04:55.469 "iscsi_set_discovery_auth", 00:04:55.469 "iscsi_get_options", 00:04:55.469 "iscsi_target_node_request_logout", 00:04:55.469 "iscsi_target_node_set_redirect", 00:04:55.469 "iscsi_target_node_set_auth", 00:04:55.469 "iscsi_target_node_add_lun", 00:04:55.469 "iscsi_get_stats", 00:04:55.469 "iscsi_get_connections", 00:04:55.469 "iscsi_portal_group_set_auth", 00:04:55.469 "iscsi_start_portal_group", 00:04:55.469 "iscsi_delete_portal_group", 00:04:55.470 "iscsi_create_portal_group", 00:04:55.470 "iscsi_get_portal_groups", 00:04:55.470 "iscsi_delete_target_node", 00:04:55.470 "iscsi_target_node_remove_pg_ig_maps", 00:04:55.470 "iscsi_target_node_add_pg_ig_maps", 00:04:55.470 "iscsi_create_target_node", 00:04:55.470 "iscsi_get_target_nodes", 00:04:55.470 "iscsi_delete_initiator_group", 00:04:55.470 "iscsi_initiator_group_remove_initiators", 00:04:55.470 "iscsi_initiator_group_add_initiators", 00:04:55.470 "iscsi_create_initiator_group", 00:04:55.470 "iscsi_get_initiator_groups", 00:04:55.470 "nvmf_set_crdt", 00:04:55.470 "nvmf_set_config", 00:04:55.470 "nvmf_set_max_subsystems", 00:04:55.470 "nvmf_stop_mdns_prr", 00:04:55.470 "nvmf_publish_mdns_prr", 00:04:55.470 "nvmf_subsystem_get_listeners", 00:04:55.470 "nvmf_subsystem_get_qpairs", 00:04:55.470 "nvmf_subsystem_get_controllers", 00:04:55.470 "nvmf_get_stats", 00:04:55.470 "nvmf_get_transports", 00:04:55.470 
"nvmf_create_transport", 00:04:55.470 "nvmf_get_targets", 00:04:55.470 "nvmf_delete_target", 00:04:55.470 "nvmf_create_target", 00:04:55.470 "nvmf_subsystem_allow_any_host", 00:04:55.470 "nvmf_subsystem_remove_host", 00:04:55.470 "nvmf_subsystem_add_host", 00:04:55.470 "nvmf_ns_remove_host", 00:04:55.470 "nvmf_ns_add_host", 00:04:55.470 "nvmf_subsystem_remove_ns", 00:04:55.470 "nvmf_subsystem_add_ns", 00:04:55.470 "nvmf_subsystem_listener_set_ana_state", 00:04:55.470 "nvmf_discovery_get_referrals", 00:04:55.470 "nvmf_discovery_remove_referral", 00:04:55.470 "nvmf_discovery_add_referral", 00:04:55.470 "nvmf_subsystem_remove_listener", 00:04:55.470 "nvmf_subsystem_add_listener", 00:04:55.470 "nvmf_delete_subsystem", 00:04:55.470 "nvmf_create_subsystem", 00:04:55.470 "nvmf_get_subsystems", 00:04:55.470 "env_dpdk_get_mem_stats", 00:04:55.470 "nbd_get_disks", 00:04:55.470 "nbd_stop_disk", 00:04:55.470 "nbd_start_disk", 00:04:55.470 "ublk_recover_disk", 00:04:55.470 "ublk_get_disks", 00:04:55.470 "ublk_stop_disk", 00:04:55.470 "ublk_start_disk", 00:04:55.470 "ublk_destroy_target", 00:04:55.470 "ublk_create_target", 00:04:55.470 "virtio_blk_create_transport", 00:04:55.470 "virtio_blk_get_transports", 00:04:55.470 "vhost_controller_set_coalescing", 00:04:55.470 "vhost_get_controllers", 00:04:55.470 "vhost_delete_controller", 00:04:55.470 "vhost_create_blk_controller", 00:04:55.470 "vhost_scsi_controller_remove_target", 00:04:55.470 "vhost_scsi_controller_add_target", 00:04:55.470 "vhost_start_scsi_controller", 00:04:55.470 "vhost_create_scsi_controller", 00:04:55.470 "thread_set_cpumask", 00:04:55.470 "framework_get_governor", 00:04:55.470 "framework_get_scheduler", 00:04:55.470 "framework_set_scheduler", 00:04:55.470 "framework_get_reactors", 00:04:55.470 "thread_get_io_channels", 00:04:55.470 "thread_get_pollers", 00:04:55.470 "thread_get_stats", 00:04:55.470 "framework_monitor_context_switch", 00:04:55.470 "spdk_kill_instance", 00:04:55.470 "log_enable_timestamps", 00:04:55.470 "log_get_flags", 00:04:55.470 "log_clear_flag", 00:04:55.470 "log_set_flag", 00:04:55.470 "log_get_level", 00:04:55.470 "log_set_level", 00:04:55.470 "log_get_print_level", 00:04:55.470 "log_set_print_level", 00:04:55.470 "framework_enable_cpumask_locks", 00:04:55.470 "framework_disable_cpumask_locks", 00:04:55.470 "framework_wait_init", 00:04:55.470 "framework_start_init", 00:04:55.470 "scsi_get_devices", 00:04:55.470 "bdev_get_histogram", 00:04:55.470 "bdev_enable_histogram", 00:04:55.470 "bdev_set_qos_limit", 00:04:55.470 "bdev_set_qd_sampling_period", 00:04:55.470 "bdev_get_bdevs", 00:04:55.470 "bdev_reset_iostat", 00:04:55.470 "bdev_get_iostat", 00:04:55.470 "bdev_examine", 00:04:55.470 "bdev_wait_for_examine", 00:04:55.470 "bdev_set_options", 00:04:55.470 "notify_get_notifications", 00:04:55.470 "notify_get_types", 00:04:55.470 "accel_get_stats", 00:04:55.470 "accel_set_options", 00:04:55.470 "accel_set_driver", 00:04:55.470 "accel_crypto_key_destroy", 00:04:55.470 "accel_crypto_keys_get", 00:04:55.470 "accel_crypto_key_create", 00:04:55.470 "accel_assign_opc", 00:04:55.470 "accel_get_module_info", 00:04:55.470 "accel_get_opc_assignments", 00:04:55.470 "vmd_rescan", 00:04:55.470 "vmd_remove_device", 00:04:55.470 "vmd_enable", 00:04:55.470 "sock_get_default_impl", 00:04:55.470 "sock_set_default_impl", 00:04:55.470 "sock_impl_set_options", 00:04:55.470 "sock_impl_get_options", 00:04:55.470 "iobuf_get_stats", 00:04:55.470 "iobuf_set_options", 00:04:55.470 "framework_get_pci_devices", 00:04:55.470 
"framework_get_config", 00:04:55.470 "framework_get_subsystems", 00:04:55.470 "trace_get_info", 00:04:55.470 "trace_get_tpoint_group_mask", 00:04:55.470 "trace_disable_tpoint_group", 00:04:55.470 "trace_enable_tpoint_group", 00:04:55.470 "trace_clear_tpoint_mask", 00:04:55.470 "trace_set_tpoint_mask", 00:04:55.470 "keyring_get_keys", 00:04:55.470 "spdk_get_version", 00:04:55.470 "rpc_get_methods" 00:04:55.470 ] 00:04:55.470 12:29:21 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:55.470 12:29:21 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:55.470 12:29:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:55.470 12:29:21 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:55.470 12:29:21 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59923 00:04:55.470 12:29:21 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 59923 ']' 00:04:55.470 12:29:21 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 59923 00:04:55.470 12:29:21 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:04:55.470 12:29:21 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:55.470 12:29:21 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59923 00:04:55.470 killing process with pid 59923 00:04:55.470 12:29:21 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:55.470 12:29:21 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:55.470 12:29:21 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59923' 00:04:55.470 12:29:21 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 59923 00:04:55.470 12:29:21 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 59923 00:04:56.034 ************************************ 00:04:56.034 END TEST spdkcli_tcp 00:04:56.034 ************************************ 00:04:56.034 00:04:56.034 real 0m2.030s 00:04:56.034 user 0m3.624s 00:04:56.034 sys 0m0.566s 00:04:56.034 12:29:22 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.034 12:29:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:56.034 12:29:22 -- common/autotest_common.sh@1142 -- # return 0 00:04:56.034 12:29:22 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:56.034 12:29:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:56.034 12:29:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.034 12:29:22 -- common/autotest_common.sh@10 -- # set +x 00:04:56.034 ************************************ 00:04:56.034 START TEST dpdk_mem_utility 00:04:56.034 ************************************ 00:04:56.034 12:29:22 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:56.293 * Looking for test storage... 00:04:56.293 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:56.293 12:29:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:56.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:56.293 12:29:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=60015 00:04:56.293 12:29:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 60015 00:04:56.293 12:29:22 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 60015 ']' 00:04:56.293 12:29:22 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.293 12:29:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:56.293 12:29:22 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:56.293 12:29:22 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.293 12:29:22 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:56.293 12:29:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:56.293 [2024-07-12 12:29:22.208221] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:04:56.293 [2024-07-12 12:29:22.208336] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60015 ] 00:04:56.293 [2024-07-12 12:29:22.346173] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.551 [2024-07-12 12:29:22.496348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.551 [2024-07-12 12:29:22.576321] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:04:57.487 12:29:23 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:57.487 12:29:23 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:57.487 12:29:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:57.487 12:29:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:57.487 12:29:23 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:57.487 12:29:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:57.487 { 00:04:57.487 "filename": "/tmp/spdk_mem_dump.txt" 00:04:57.487 } 00:04:57.487 12:29:23 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:57.487 12:29:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:57.487 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:57.487 1 heaps totaling size 814.000000 MiB 00:04:57.487 size: 814.000000 MiB heap id: 0 00:04:57.487 end heaps---------- 00:04:57.487 8 mempools totaling size 598.116089 MiB 00:04:57.487 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:57.487 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:57.487 size: 84.521057 MiB name: bdev_io_60015 00:04:57.487 size: 51.011292 MiB name: evtpool_60015 00:04:57.487 size: 50.003479 MiB name: msgpool_60015 00:04:57.487 size: 21.763794 MiB name: PDU_Pool 00:04:57.487 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:57.487 size: 0.026123 MiB name: Session_Pool 00:04:57.487 end mempools------- 00:04:57.487 6 memzones totaling size 4.142822 MiB 00:04:57.487 size: 1.000366 MiB name: RG_ring_0_60015 00:04:57.487 size: 1.000366 MiB 
name: RG_ring_1_60015 00:04:57.487 size: 1.000366 MiB name: RG_ring_4_60015 00:04:57.487 size: 1.000366 MiB name: RG_ring_5_60015 00:04:57.487 size: 0.125366 MiB name: RG_ring_2_60015 00:04:57.487 size: 0.015991 MiB name: RG_ring_3_60015 00:04:57.487 end memzones------- 00:04:57.487 12:29:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:57.487 heap id: 0 total size: 814.000000 MiB number of busy elements: 301 number of free elements: 15 00:04:57.487 list of free elements. size: 12.471741 MiB 00:04:57.487 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:57.487 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:57.487 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:57.487 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:57.487 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:57.487 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:57.487 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:57.487 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:57.487 element at address: 0x200000200000 with size: 0.833191 MiB 00:04:57.487 element at address: 0x20001aa00000 with size: 0.568970 MiB 00:04:57.487 element at address: 0x20000b200000 with size: 0.488892 MiB 00:04:57.487 element at address: 0x200000800000 with size: 0.486145 MiB 00:04:57.487 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:57.487 element at address: 0x200027e00000 with size: 0.395935 MiB 00:04:57.487 element at address: 0x200003a00000 with size: 0.347839 MiB 00:04:57.487 list of standard malloc elements. size: 199.265686 MiB 00:04:57.487 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:57.487 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:57.487 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:57.487 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:57.487 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:57.487 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:57.487 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:57.487 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:57.487 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:57.487 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:04:57.487 
element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:57.487 element at address: 0x20000087c740 with size: 0.000183 MiB 00:04:57.487 element at address: 0x20000087c800 with size: 0.000183 MiB 00:04:57.487 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:04:57.487 element at address: 0x20000087c980 with size: 0.000183 MiB 00:04:57.487 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:04:57.487 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:04:57.487 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:04:57.487 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:04:57.487 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:04:57.487 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:57.487 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:57.487 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:57.487 element at address: 
0x200003a590c0 with size: 0.000183 MiB 00:04:57.487 element at address: 0x200003a59180 with size: 0.000183 MiB 00:04:57.487 element at address: 0x200003a59240 with size: 0.000183 MiB 00:04:57.487 element at address: 0x200003a59300 with size: 0.000183 MiB 00:04:57.487 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:04:57.487 element at address: 0x200003a59480 with size: 0.000183 MiB 00:04:57.487 element at address: 0x200003a59540 with size: 0.000183 MiB 00:04:57.487 element at address: 0x200003a59600 with size: 0.000183 MiB 00:04:57.487 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:04:57.487 element at address: 0x200003a59780 with size: 0.000183 MiB 00:04:57.487 element at address: 0x200003a59840 with size: 0.000183 MiB 00:04:57.487 element at address: 0x200003a59900 with size: 0.000183 MiB 00:04:57.487 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:04:57.487 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:04:57.487 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:04:57.487 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:04:57.487 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:04:57.487 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:04:57.487 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:04:57.487 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:57.488 element at address: 0x2000070fdd80 with size: 
0.000183 MiB 00:04:57.488 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:57.488 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:57.488 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:57.488 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:57.488 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:04:57.488 
element at address: 0x20001aa93280 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:57.488 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200027e65680 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200027e6c280 with size: 0.000183 MiB 00:04:57.488 element at address: 
0x200027e6c480 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:04:57.488 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6e940 with size: 
0.000183 MiB 00:04:57.489 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:57.489 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:57.489 list of memzone associated elements. 
size: 602.262573 MiB 00:04:57.489 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:57.489 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:57.489 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:57.489 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:57.489 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:57.489 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_60015_0 00:04:57.489 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:57.489 associated memzone info: size: 48.002930 MiB name: MP_evtpool_60015_0 00:04:57.489 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:57.489 associated memzone info: size: 48.002930 MiB name: MP_msgpool_60015_0 00:04:57.489 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:57.489 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:57.489 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:57.489 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:57.489 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:57.489 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_60015 00:04:57.489 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:57.489 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_60015 00:04:57.489 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:57.489 associated memzone info: size: 1.007996 MiB name: MP_evtpool_60015 00:04:57.489 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:57.489 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:57.489 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:57.489 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:57.489 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:57.489 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:57.489 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:57.489 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:57.489 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:57.489 associated memzone info: size: 1.000366 MiB name: RG_ring_0_60015 00:04:57.489 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:57.489 associated memzone info: size: 1.000366 MiB name: RG_ring_1_60015 00:04:57.489 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:57.489 associated memzone info: size: 1.000366 MiB name: RG_ring_4_60015 00:04:57.489 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:57.489 associated memzone info: size: 1.000366 MiB name: RG_ring_5_60015 00:04:57.489 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:57.489 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_60015 00:04:57.489 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:57.489 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:57.489 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:57.489 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:57.489 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:57.489 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:57.489 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:57.489 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_60015 00:04:57.489 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:57.489 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:57.489 element at address: 0x200027e65740 with size: 0.023743 MiB 00:04:57.489 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:57.489 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:57.489 associated memzone info: size: 0.015991 MiB name: RG_ring_3_60015 00:04:57.489 element at address: 0x200027e6b880 with size: 0.002441 MiB 00:04:57.489 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:57.489 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:04:57.489 associated memzone info: size: 0.000183 MiB name: MP_msgpool_60015 00:04:57.489 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:57.489 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_60015 00:04:57.489 element at address: 0x200027e6c340 with size: 0.000305 MiB 00:04:57.489 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:57.489 12:29:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:57.489 12:29:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 60015 00:04:57.489 12:29:23 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 60015 ']' 00:04:57.489 12:29:23 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 60015 00:04:57.489 12:29:23 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:57.489 12:29:23 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:57.489 12:29:23 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60015 00:04:57.489 killing process with pid 60015 00:04:57.489 12:29:23 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:57.489 12:29:23 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:57.489 12:29:23 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60015' 00:04:57.489 12:29:23 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 60015 00:04:57.489 12:29:23 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 60015 00:04:58.055 00:04:58.055 real 0m1.952s 00:04:58.055 user 0m2.055s 00:04:58.055 sys 0m0.518s 00:04:58.055 12:29:24 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.055 12:29:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:58.055 ************************************ 00:04:58.055 END TEST dpdk_mem_utility 00:04:58.055 ************************************ 00:04:58.055 12:29:24 -- common/autotest_common.sh@1142 -- # return 0 00:04:58.055 12:29:24 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:58.055 12:29:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.055 12:29:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.055 12:29:24 -- common/autotest_common.sh@10 -- # set +x 00:04:58.055 ************************************ 00:04:58.055 START TEST event 00:04:58.055 ************************************ 00:04:58.055 12:29:24 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:58.314 * Looking for test storage... 
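For reference, the dpdk_mem_utility flow that just completed is driven by the two pieces shown in the trace: the env_dpdk_get_mem_stats RPC, which has the target write /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py, which parses that dump (the -m 0 invocation is what produced the detailed heap-0 element and memzone listing above). A rough sketch of reproducing it by hand, assuming a target is already listening on /var/tmp/spdk.sock:

    SPDK=/home/vagrant/spdk_repo/spdk

    # Ask the running target to dump DPDK memory stats (default file: /tmp/spdk_mem_dump.txt).
    "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats

    # Summarize the dump, then print the detailed view for heap 0.
    "$SPDK/scripts/dpdk_mem_info.py"
    "$SPDK/scripts/dpdk_mem_info.py" -m 0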
00:04:58.314 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:58.314 12:29:24 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:58.314 12:29:24 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:58.314 12:29:24 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:58.314 12:29:24 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:58.314 12:29:24 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.314 12:29:24 event -- common/autotest_common.sh@10 -- # set +x 00:04:58.314 ************************************ 00:04:58.314 START TEST event_perf 00:04:58.314 ************************************ 00:04:58.314 12:29:24 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:58.314 Running I/O for 1 seconds...[2024-07-12 12:29:24.170487] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:04:58.314 [2024-07-12 12:29:24.170586] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60091 ] 00:04:58.314 [2024-07-12 12:29:24.310975] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:58.572 [2024-07-12 12:29:24.457325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.572 [2024-07-12 12:29:24.457489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:58.572 [2024-07-12 12:29:24.457591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:58.572 [2024-07-12 12:29:24.457598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.527 Running I/O for 1 seconds... 00:04:59.527 lcore 0: 196988 00:04:59.527 lcore 1: 196988 00:04:59.527 lcore 2: 196987 00:04:59.527 lcore 3: 196987 00:04:59.527 done. 00:04:59.527 ************************************ 00:04:59.527 END TEST event_perf 00:04:59.527 ************************************ 00:04:59.527 00:04:59.527 real 0m1.429s 00:04:59.527 user 0m4.217s 00:04:59.527 sys 0m0.081s 00:04:59.527 12:29:25 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.527 12:29:25 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:59.786 12:29:25 event -- common/autotest_common.sh@1142 -- # return 0 00:04:59.786 12:29:25 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:59.786 12:29:25 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:59.786 12:29:25 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.786 12:29:25 event -- common/autotest_common.sh@10 -- # set +x 00:04:59.786 ************************************ 00:04:59.786 START TEST event_reactor 00:04:59.786 ************************************ 00:04:59.786 12:29:25 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:59.786 [2024-07-12 12:29:25.650455] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:04:59.786 [2024-07-12 12:29:25.650570] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60125 ] 00:04:59.786 [2024-07-12 12:29:25.786030] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.044 [2024-07-12 12:29:25.933598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.978 test_start 00:05:00.978 oneshot 00:05:00.978 tick 100 00:05:00.978 tick 100 00:05:00.978 tick 250 00:05:00.978 tick 100 00:05:00.978 tick 100 00:05:00.978 tick 250 00:05:00.978 tick 500 00:05:00.978 tick 100 00:05:00.978 tick 100 00:05:00.978 tick 100 00:05:00.978 tick 250 00:05:00.978 tick 100 00:05:00.978 tick 100 00:05:00.978 test_end 00:05:00.978 ************************************ 00:05:00.978 END TEST event_reactor 00:05:00.978 ************************************ 00:05:00.978 00:05:00.978 real 0m1.411s 00:05:00.978 user 0m1.229s 00:05:00.978 sys 0m0.074s 00:05:00.978 12:29:27 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:00.978 12:29:27 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:01.236 12:29:27 event -- common/autotest_common.sh@1142 -- # return 0 00:05:01.236 12:29:27 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:01.236 12:29:27 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:01.236 12:29:27 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.236 12:29:27 event -- common/autotest_common.sh@10 -- # set +x 00:05:01.236 ************************************ 00:05:01.236 START TEST event_reactor_perf 00:05:01.236 ************************************ 00:05:01.236 12:29:27 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:01.236 [2024-07-12 12:29:27.115743] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:05:01.236 [2024-07-12 12:29:27.115861] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60160 ] 00:05:01.236 [2024-07-12 12:29:27.249645] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.495 [2024-07-12 12:29:27.361545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.430 test_start 00:05:02.430 test_end 00:05:02.430 Performance: 371910 events per second 00:05:02.430 ************************************ 00:05:02.430 END TEST event_reactor_perf 00:05:02.430 ************************************ 00:05:02.430 00:05:02.430 real 0m1.378s 00:05:02.430 user 0m1.207s 00:05:02.430 sys 0m0.065s 00:05:02.430 12:29:28 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.430 12:29:28 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:02.688 12:29:28 event -- common/autotest_common.sh@1142 -- # return 0 00:05:02.688 12:29:28 event -- event/event.sh@49 -- # uname -s 00:05:02.688 12:29:28 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:02.688 12:29:28 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:02.688 12:29:28 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:02.688 12:29:28 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.688 12:29:28 event -- common/autotest_common.sh@10 -- # set +x 00:05:02.688 ************************************ 00:05:02.688 START TEST event_scheduler 00:05:02.688 ************************************ 00:05:02.688 12:29:28 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:02.688 * Looking for test storage... 00:05:02.688 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:02.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.688 12:29:28 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:02.688 12:29:28 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60222 00:05:02.688 12:29:28 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:02.688 12:29:28 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:02.688 12:29:28 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60222 00:05:02.688 12:29:28 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 60222 ']' 00:05:02.688 12:29:28 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.688 12:29:28 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:02.688 12:29:28 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.688 12:29:28 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:02.688 12:29:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:02.688 [2024-07-12 12:29:28.658822] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:05:02.688 [2024-07-12 12:29:28.658936] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60222 ] 00:05:02.996 [2024-07-12 12:29:28.790204] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:02.996 [2024-07-12 12:29:28.941310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.996 [2024-07-12 12:29:28.941423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.996 [2024-07-12 12:29:28.941527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:02.996 [2024-07-12 12:29:28.941532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:03.932 12:29:29 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:03.932 12:29:29 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:03.932 12:29:29 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:03.932 12:29:29 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.932 12:29:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:03.932 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:03.932 POWER: Cannot set governor of lcore 0 to userspace 00:05:03.932 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:03.932 POWER: Cannot set governor of lcore 0 to performance 00:05:03.932 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:03.932 POWER: Cannot set governor of lcore 0 to userspace 00:05:03.932 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:03.932 POWER: Cannot set governor of lcore 0 to userspace 00:05:03.932 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:03.932 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:03.932 POWER: Unable to set Power Management Environment for lcore 0 00:05:03.932 [2024-07-12 12:29:29.681098] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:03.932 [2024-07-12 12:29:29.681200] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:03.933 [2024-07-12 12:29:29.681244] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:03.933 [2024-07-12 12:29:29.681337] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:03.933 [2024-07-12 12:29:29.681379] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:03.933 [2024-07-12 12:29:29.681491] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:03.933 12:29:29 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.933 12:29:29 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:03.933 12:29:29 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.933 12:29:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:03.933 [2024-07-12 12:29:29.767351] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:03.933 [2024-07-12 12:29:29.813269] 
scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:03.933 12:29:29 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.933 12:29:29 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:03.933 12:29:29 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.933 12:29:29 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.933 12:29:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:03.933 ************************************ 00:05:03.933 START TEST scheduler_create_thread 00:05:03.933 ************************************ 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.933 2 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.933 3 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.933 4 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.933 5 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.933 6 00:05:03.933 
12:29:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.933 7 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.933 8 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.933 9 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.933 10 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.933 12:29:29 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:03.933 12:29:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.308 ************************************ 00:05:05.308 END TEST scheduler_create_thread 00:05:05.308 ************************************ 00:05:05.308 12:29:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:05.308 00:05:05.308 real 0m1.170s 00:05:05.308 user 0m0.015s 00:05:05.308 sys 0m0.004s 00:05:05.308 12:29:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.308 12:29:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.308 12:29:31 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:05.308 12:29:31 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:05.308 12:29:31 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60222 00:05:05.308 12:29:31 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 60222 ']' 00:05:05.308 12:29:31 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 60222 00:05:05.308 12:29:31 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:05.308 12:29:31 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:05.308 12:29:31 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60222 00:05:05.308 killing process with pid 60222 00:05:05.308 12:29:31 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:05.308 12:29:31 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:05.308 12:29:31 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60222' 00:05:05.308 12:29:31 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 60222 00:05:05.308 12:29:31 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 60222 00:05:05.566 [2024-07-12 12:29:31.476508] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
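[editor's note] The scheduler_create_thread run above drives the whole cycle through the test's RPC plugin (scheduler_plugin) via the rpc_cmd helper sourced from autotest_common.sh. A minimal sketch of that create/activate/delete pattern, using only calls visible in the trace (the thread name, core mask and 50% activity value here are illustrative picks from the traced set):

    # create an idle thread pinned to core 0, raise it to 50% active, then delete it
    tid=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$tid"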
00:05:05.824 00:05:05.824 real 0m3.297s 00:05:05.824 user 0m5.836s 00:05:05.824 sys 0m0.424s 00:05:05.824 12:29:31 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.824 ************************************ 00:05:05.824 12:29:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:05.824 END TEST event_scheduler 00:05:05.824 ************************************ 00:05:05.824 12:29:31 event -- common/autotest_common.sh@1142 -- # return 0 00:05:05.824 12:29:31 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:05.824 12:29:31 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:05.824 12:29:31 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:05.824 12:29:31 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.824 12:29:31 event -- common/autotest_common.sh@10 -- # set +x 00:05:05.824 ************************************ 00:05:05.824 START TEST app_repeat 00:05:05.824 ************************************ 00:05:05.824 12:29:31 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:05.824 12:29:31 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.824 12:29:31 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.824 12:29:31 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:05.824 12:29:31 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.824 12:29:31 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:05.824 12:29:31 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:05.824 12:29:31 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:05.824 Process app_repeat pid: 60306 00:05:05.824 spdk_app_start Round 0 00:05:05.824 12:29:31 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60306 00:05:05.824 12:29:31 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:05.824 12:29:31 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60306' 00:05:05.824 12:29:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:05.824 12:29:31 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:05.824 12:29:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:05.824 12:29:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60306 /var/tmp/spdk-nbd.sock 00:05:05.824 12:29:31 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60306 ']' 00:05:05.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:05.824 12:29:31 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:05.824 12:29:31 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:05.824 12:29:31 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:05.824 12:29:31 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:05.824 12:29:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:06.082 [2024-07-12 12:29:31.918650] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:05:06.082 [2024-07-12 12:29:31.918756] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60306 ] 00:05:06.082 [2024-07-12 12:29:32.059354] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:06.340 [2024-07-12 12:29:32.236189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.340 [2024-07-12 12:29:32.236205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.340 [2024-07-12 12:29:32.326674] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:07.275 12:29:32 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:07.275 12:29:32 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:07.275 12:29:32 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:07.275 Malloc0 00:05:07.275 12:29:33 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:07.532 Malloc1 00:05:07.532 12:29:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:07.532 12:29:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.532 12:29:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:07.532 12:29:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:07.532 12:29:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.532 12:29:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:07.532 12:29:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:07.532 12:29:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.532 12:29:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:07.532 12:29:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:07.532 12:29:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.532 12:29:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:07.532 12:29:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:07.532 12:29:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:07.532 12:29:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:07.532 12:29:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:07.789 /dev/nbd0 00:05:08.045 12:29:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:08.045 12:29:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:08.045 12:29:33 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:08.045 12:29:33 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:08.045 12:29:33 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:08.045 12:29:33 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:08.045 12:29:33 
event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:08.045 12:29:33 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:08.045 12:29:33 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:08.045 12:29:33 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:08.045 12:29:33 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:08.045 1+0 records in 00:05:08.045 1+0 records out 00:05:08.045 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000646556 s, 6.3 MB/s 00:05:08.045 12:29:33 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:08.045 12:29:33 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:08.045 12:29:33 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:08.045 12:29:33 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:08.045 12:29:33 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:08.045 12:29:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:08.045 12:29:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:08.045 12:29:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:08.301 /dev/nbd1 00:05:08.301 12:29:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:08.301 12:29:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:08.301 12:29:34 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:08.301 12:29:34 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:08.301 12:29:34 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:08.301 12:29:34 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:08.301 12:29:34 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:08.301 12:29:34 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:08.301 12:29:34 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:08.301 12:29:34 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:08.301 12:29:34 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:08.301 1+0 records in 00:05:08.301 1+0 records out 00:05:08.301 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416274 s, 9.8 MB/s 00:05:08.301 12:29:34 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:08.301 12:29:34 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:08.301 12:29:34 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:08.301 12:29:34 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:08.301 12:29:34 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:08.301 12:29:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:08.301 12:29:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:08.301 12:29:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:05:08.301 12:29:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.301 12:29:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:08.557 12:29:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:08.557 { 00:05:08.557 "nbd_device": "/dev/nbd0", 00:05:08.557 "bdev_name": "Malloc0" 00:05:08.557 }, 00:05:08.557 { 00:05:08.557 "nbd_device": "/dev/nbd1", 00:05:08.557 "bdev_name": "Malloc1" 00:05:08.557 } 00:05:08.557 ]' 00:05:08.557 12:29:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:08.557 { 00:05:08.557 "nbd_device": "/dev/nbd0", 00:05:08.557 "bdev_name": "Malloc0" 00:05:08.557 }, 00:05:08.557 { 00:05:08.557 "nbd_device": "/dev/nbd1", 00:05:08.557 "bdev_name": "Malloc1" 00:05:08.557 } 00:05:08.557 ]' 00:05:08.557 12:29:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:08.557 12:29:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:08.557 /dev/nbd1' 00:05:08.557 12:29:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:08.557 /dev/nbd1' 00:05:08.557 12:29:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:08.557 12:29:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:08.557 12:29:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:08.557 12:29:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:08.557 12:29:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:08.557 12:29:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:08.557 12:29:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.557 12:29:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:08.557 12:29:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:08.557 12:29:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:08.557 12:29:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:08.557 12:29:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:08.557 256+0 records in 00:05:08.557 256+0 records out 00:05:08.557 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00927215 s, 113 MB/s 00:05:08.557 12:29:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:08.557 12:29:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:08.557 256+0 records in 00:05:08.557 256+0 records out 00:05:08.557 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250062 s, 41.9 MB/s 00:05:08.557 12:29:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:08.557 12:29:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:08.557 256+0 records in 00:05:08.557 256+0 records out 00:05:08.557 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0311408 s, 33.7 MB/s 00:05:08.557 12:29:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:08.557 12:29:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.557 12:29:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:08.557 12:29:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:08.557 12:29:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:08.557 12:29:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:08.557 12:29:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:08.557 12:29:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:08.557 12:29:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:08.557 12:29:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:08.557 12:29:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:08.557 12:29:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:08.557 12:29:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:08.557 12:29:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.557 12:29:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.557 12:29:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:08.557 12:29:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:08.557 12:29:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:08.557 12:29:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:08.813 12:29:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:08.813 12:29:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:08.813 12:29:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:08.813 12:29:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:08.813 12:29:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:08.813 12:29:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:08.813 12:29:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:08.813 12:29:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:08.813 12:29:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:08.813 12:29:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:09.070 12:29:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:09.070 12:29:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:09.070 12:29:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:09.070 12:29:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:09.070 12:29:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:09.070 12:29:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:09.070 12:29:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:09.070 12:29:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:09.070 12:29:35 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:09.070 12:29:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.070 12:29:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:09.325 12:29:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:09.325 12:29:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:09.325 12:29:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:09.582 12:29:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:09.583 12:29:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:09.583 12:29:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:09.583 12:29:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:09.583 12:29:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:09.583 12:29:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:09.583 12:29:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:09.583 12:29:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:09.583 12:29:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:09.583 12:29:35 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:09.839 12:29:35 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:10.096 [2024-07-12 12:29:36.075911] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:10.354 [2024-07-12 12:29:36.241833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.354 [2024-07-12 12:29:36.241843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.354 [2024-07-12 12:29:36.325972] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:10.354 [2024-07-12 12:29:36.326100] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:10.354 [2024-07-12 12:29:36.326116] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:12.891 spdk_app_start Round 1 00:05:12.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:12.891 12:29:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:12.891 12:29:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:12.891 12:29:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60306 /var/tmp/spdk-nbd.sock 00:05:12.891 12:29:38 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60306 ']' 00:05:12.891 12:29:38 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:12.891 12:29:38 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:12.891 12:29:38 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
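[editor's note] Round 0 above verifies the exported devices with the nbd_common helpers: a 1 MiB file is filled from /dev/urandom, written onto each /dev/nbdX with direct I/O, then compared back byte-for-byte before the devices are stopped. A condensed sketch of that write/verify pass, assuming the same commands the trace shows ($testdir stands for the test directory; the temp-file name mirrors the traced nbdrandtest):

    tmp=$testdir/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256               # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct    # write it to the export
        cmp -b -n 1M "$tmp" "$nbd"                               # read back and compare
    done
    rm "$tmp"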
00:05:12.891 12:29:38 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:12.891 12:29:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:13.149 12:29:38 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:13.149 12:29:38 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:13.149 12:29:38 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:13.149 Malloc0 00:05:13.406 12:29:39 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:13.406 Malloc1 00:05:13.406 12:29:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:13.406 12:29:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.406 12:29:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:13.406 12:29:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:13.406 12:29:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.406 12:29:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:13.406 12:29:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:13.406 12:29:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.406 12:29:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:13.406 12:29:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:13.406 12:29:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.406 12:29:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:13.663 12:29:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:13.663 12:29:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:13.663 12:29:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.663 12:29:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:13.663 /dev/nbd0 00:05:13.663 12:29:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:13.663 12:29:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:13.663 12:29:39 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:13.663 12:29:39 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:13.663 12:29:39 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:13.663 12:29:39 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:13.663 12:29:39 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:13.663 12:29:39 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:13.663 12:29:39 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:13.663 12:29:39 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:13.663 12:29:39 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:13.663 1+0 records in 00:05:13.663 1+0 records out 
00:05:13.663 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320217 s, 12.8 MB/s 00:05:13.663 12:29:39 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:13.921 12:29:39 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:13.921 12:29:39 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:13.921 12:29:39 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:13.921 12:29:39 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:13.921 12:29:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:13.921 12:29:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.921 12:29:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:13.921 /dev/nbd1 00:05:13.921 12:29:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:14.180 12:29:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:14.180 12:29:39 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:14.180 12:29:39 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:14.180 12:29:39 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:14.180 12:29:39 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:14.180 12:29:39 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:14.180 12:29:40 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:14.180 12:29:40 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:14.180 12:29:40 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:14.180 12:29:40 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:14.180 1+0 records in 00:05:14.180 1+0 records out 00:05:14.180 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000640589 s, 6.4 MB/s 00:05:14.180 12:29:40 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:14.180 12:29:40 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:14.180 12:29:40 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:14.180 12:29:40 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:14.180 12:29:40 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:14.180 12:29:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:14.180 12:29:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.180 12:29:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:14.180 12:29:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.180 12:29:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:14.180 12:29:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:14.180 { 00:05:14.180 "nbd_device": "/dev/nbd0", 00:05:14.180 "bdev_name": "Malloc0" 00:05:14.180 }, 00:05:14.180 { 00:05:14.180 "nbd_device": "/dev/nbd1", 00:05:14.180 "bdev_name": "Malloc1" 00:05:14.180 } 
00:05:14.180 ]' 00:05:14.180 12:29:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:14.180 { 00:05:14.180 "nbd_device": "/dev/nbd0", 00:05:14.180 "bdev_name": "Malloc0" 00:05:14.180 }, 00:05:14.180 { 00:05:14.180 "nbd_device": "/dev/nbd1", 00:05:14.180 "bdev_name": "Malloc1" 00:05:14.180 } 00:05:14.180 ]' 00:05:14.180 12:29:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:14.439 12:29:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:14.439 /dev/nbd1' 00:05:14.439 12:29:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:14.439 /dev/nbd1' 00:05:14.439 12:29:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:14.439 12:29:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:14.439 12:29:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:14.439 12:29:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:14.439 12:29:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:14.439 12:29:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:14.439 12:29:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.439 12:29:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:14.439 12:29:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:14.439 12:29:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:14.439 12:29:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:14.439 12:29:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:14.439 256+0 records in 00:05:14.439 256+0 records out 00:05:14.439 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00804924 s, 130 MB/s 00:05:14.439 12:29:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:14.439 12:29:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:14.439 256+0 records in 00:05:14.439 256+0 records out 00:05:14.439 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0221892 s, 47.3 MB/s 00:05:14.439 12:29:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:14.439 12:29:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:14.439 256+0 records in 00:05:14.439 256+0 records out 00:05:14.439 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0301374 s, 34.8 MB/s 00:05:14.439 12:29:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:14.439 12:29:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.439 12:29:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:14.439 12:29:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:14.439 12:29:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:14.439 12:29:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:14.439 12:29:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:14.439 12:29:40 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:14.439 12:29:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:14.439 12:29:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:14.439 12:29:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:14.439 12:29:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:14.439 12:29:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:14.439 12:29:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.439 12:29:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.439 12:29:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:14.439 12:29:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:14.439 12:29:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:14.439 12:29:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:14.699 12:29:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:14.699 12:29:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:14.699 12:29:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:14.699 12:29:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:14.699 12:29:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:14.699 12:29:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:14.699 12:29:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:14.699 12:29:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:14.699 12:29:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:14.699 12:29:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:14.957 12:29:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:14.957 12:29:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:14.957 12:29:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:14.957 12:29:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:14.957 12:29:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:14.957 12:29:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:14.957 12:29:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:14.957 12:29:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:14.957 12:29:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:14.958 12:29:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.958 12:29:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:15.219 12:29:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:15.219 12:29:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:15.219 12:29:41 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:15.219 12:29:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:15.219 12:29:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:15.219 12:29:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:15.219 12:29:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:15.219 12:29:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:15.219 12:29:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:15.219 12:29:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:15.219 12:29:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:15.219 12:29:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:15.219 12:29:41 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:15.477 12:29:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:16.044 [2024-07-12 12:29:41.864621] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:16.044 [2024-07-12 12:29:42.024895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.044 [2024-07-12 12:29:42.024904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.044 [2024-07-12 12:29:42.107706] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:16.044 [2024-07-12 12:29:42.107829] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:16.044 [2024-07-12 12:29:42.107843] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:18.575 spdk_app_start Round 2 00:05:18.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:18.575 12:29:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:18.575 12:29:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:18.575 12:29:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60306 /var/tmp/spdk-nbd.sock 00:05:18.575 12:29:44 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60306 ']' 00:05:18.575 12:29:44 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:18.575 12:29:44 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:18.575 12:29:44 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
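The write/verify pass traced in this round is the whole data-integrity check: nbd_dd_data_verify fills a scratch file with random data, copies it onto every exported NBD device, then byte-compares each device against that file. Below is a minimal standalone sketch of that loop, not the test script itself; the scratch path is an assumption (the test uses test/event/nbdrandtest), while the dd and cmp invocations mirror the trace above.

#!/usr/bin/env bash
# Hedged sketch of the dd/cmp pass shown in the trace above.
set -euo pipefail

nbd_list=(/dev/nbd0 /dev/nbd1)
tmp_file=/tmp/nbdrandtest            # assumption: any scratch path works

# write: 1 MiB of random data, pushed to every exported NBD device
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done

# verify: cmp exits non-zero (and aborts the script) on the first mismatch
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"
done
rm "$tmp_file"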
00:05:18.575 12:29:44 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:18.575 12:29:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:18.833 12:29:44 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:18.833 12:29:44 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:18.833 12:29:44 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:19.091 Malloc0 00:05:19.091 12:29:45 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:19.349 Malloc1 00:05:19.349 12:29:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:19.349 12:29:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.349 12:29:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:19.349 12:29:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:19.349 12:29:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.349 12:29:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:19.349 12:29:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:19.349 12:29:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.349 12:29:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:19.349 12:29:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:19.349 12:29:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.349 12:29:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:19.349 12:29:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:19.349 12:29:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:19.349 12:29:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.350 12:29:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:19.608 /dev/nbd0 00:05:19.608 12:29:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:19.608 12:29:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:19.608 12:29:45 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:19.608 12:29:45 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:19.608 12:29:45 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:19.608 12:29:45 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:19.608 12:29:45 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:19.608 12:29:45 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:19.608 12:29:45 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:19.608 12:29:45 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:19.608 12:29:45 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:19.608 1+0 records in 00:05:19.608 1+0 records out 
00:05:19.608 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000154135 s, 26.6 MB/s 00:05:19.608 12:29:45 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:19.608 12:29:45 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:19.608 12:29:45 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:19.608 12:29:45 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:19.608 12:29:45 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:19.608 12:29:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:19.608 12:29:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.608 12:29:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:19.866 /dev/nbd1 00:05:19.866 12:29:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:19.866 12:29:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:19.866 12:29:45 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:19.866 12:29:45 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:19.866 12:29:45 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:19.866 12:29:45 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:19.866 12:29:45 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:19.866 12:29:45 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:19.866 12:29:45 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:19.866 12:29:45 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:19.866 12:29:45 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:19.866 1+0 records in 00:05:19.866 1+0 records out 00:05:19.866 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407861 s, 10.0 MB/s 00:05:19.866 12:29:45 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:19.866 12:29:45 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:19.866 12:29:45 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:19.866 12:29:45 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:19.866 12:29:45 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:19.866 12:29:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:19.866 12:29:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.866 12:29:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:19.866 12:29:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.866 12:29:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:20.124 12:29:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:20.124 { 00:05:20.124 "nbd_device": "/dev/nbd0", 00:05:20.124 "bdev_name": "Malloc0" 00:05:20.124 }, 00:05:20.124 { 00:05:20.124 "nbd_device": "/dev/nbd1", 00:05:20.124 "bdev_name": "Malloc1" 00:05:20.124 } 
00:05:20.124 ]' 00:05:20.124 12:29:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:20.124 { 00:05:20.124 "nbd_device": "/dev/nbd0", 00:05:20.124 "bdev_name": "Malloc0" 00:05:20.124 }, 00:05:20.124 { 00:05:20.124 "nbd_device": "/dev/nbd1", 00:05:20.124 "bdev_name": "Malloc1" 00:05:20.124 } 00:05:20.124 ]' 00:05:20.124 12:29:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:20.124 12:29:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:20.124 /dev/nbd1' 00:05:20.124 12:29:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:20.124 /dev/nbd1' 00:05:20.124 12:29:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:20.124 12:29:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:20.124 12:29:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:20.124 12:29:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:20.124 12:29:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:20.124 12:29:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:20.124 12:29:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.124 12:29:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:20.124 12:29:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:20.124 12:29:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:20.124 12:29:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:20.124 12:29:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:20.124 256+0 records in 00:05:20.124 256+0 records out 00:05:20.124 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00784921 s, 134 MB/s 00:05:20.124 12:29:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:20.124 12:29:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:20.387 256+0 records in 00:05:20.387 256+0 records out 00:05:20.387 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239828 s, 43.7 MB/s 00:05:20.387 12:29:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:20.387 12:29:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:20.387 256+0 records in 00:05:20.387 256+0 records out 00:05:20.387 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241631 s, 43.4 MB/s 00:05:20.387 12:29:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:20.387 12:29:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.387 12:29:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:20.387 12:29:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:20.387 12:29:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:20.387 12:29:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:20.387 12:29:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:20.387 12:29:46 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:20.387 12:29:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:20.387 12:29:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:20.387 12:29:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:20.387 12:29:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:20.387 12:29:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:20.387 12:29:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.387 12:29:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.387 12:29:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:20.387 12:29:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:20.387 12:29:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:20.387 12:29:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:20.645 12:29:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:20.645 12:29:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:20.645 12:29:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:20.645 12:29:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:20.645 12:29:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:20.645 12:29:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:20.645 12:29:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:20.645 12:29:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:20.645 12:29:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:20.645 12:29:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:20.903 12:29:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:20.903 12:29:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:20.903 12:29:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:20.903 12:29:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:20.903 12:29:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:20.903 12:29:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:20.903 12:29:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:20.903 12:29:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:20.903 12:29:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:20.903 12:29:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.903 12:29:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:21.161 12:29:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:21.161 12:29:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:21.161 12:29:47 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:21.161 12:29:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:21.161 12:29:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:21.161 12:29:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:21.161 12:29:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:21.161 12:29:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:21.161 12:29:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:21.161 12:29:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:21.161 12:29:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:21.161 12:29:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:21.161 12:29:47 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:21.420 12:29:47 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:21.987 [2024-07-12 12:29:47.764705] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:21.987 [2024-07-12 12:29:47.895033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.987 [2024-07-12 12:29:47.895046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.987 [2024-07-12 12:29:47.975324] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:21.987 [2024-07-12 12:29:47.975461] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:21.987 [2024-07-12 12:29:47.975477] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:24.518 12:29:50 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60306 /var/tmp/spdk-nbd.sock 00:05:24.518 12:29:50 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60306 ']' 00:05:24.518 12:29:50 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:24.518 12:29:50 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:24.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:24.518 12:29:50 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
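The teardown just traced is the mirror image: each device is stopped over the dedicated RPC socket, the script polls /proc/partitions until the kernel entry disappears, and nbd_get_disks must then return an empty list so the count check ('[' 0 -ne 0 ']') passes. A sketch of that sequence follows, assuming the same rpc.py path and socket as the trace; the 0.1 s poll interval is an assumption standing in for the 20-iteration wait loop.

#!/usr/bin/env bash
# Hedged sketch of the NBD teardown traced above.
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

for dev in /dev/nbd0 /dev/nbd1; do
    "$rpc" -s "$sock" nbd_stop_disk "$dev"
    name=$(basename "$dev")
    # wait (up to ~2 s) for the kernel to drop the partition entry
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$name" /proc/partitions || break
        sleep 0.1
    done
done

# with everything stopped, nbd_get_disks returns '[]' and the name list is empty
"$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device'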
00:05:24.518 12:29:50 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:24.518 12:29:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:24.776 12:29:50 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.776 12:29:50 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:24.776 12:29:50 event.app_repeat -- event/event.sh@39 -- # killprocess 60306 00:05:24.776 12:29:50 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 60306 ']' 00:05:24.776 12:29:50 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 60306 00:05:24.776 12:29:50 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:24.776 12:29:50 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:24.776 12:29:50 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60306 00:05:24.776 killing process with pid 60306 00:05:24.776 12:29:50 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:24.776 12:29:50 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:24.776 12:29:50 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60306' 00:05:24.776 12:29:50 event.app_repeat -- common/autotest_common.sh@967 -- # kill 60306 00:05:24.776 12:29:50 event.app_repeat -- common/autotest_common.sh@972 -- # wait 60306 00:05:25.034 spdk_app_start is called in Round 0. 00:05:25.034 Shutdown signal received, stop current app iteration 00:05:25.034 Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 reinitialization... 00:05:25.034 spdk_app_start is called in Round 1. 00:05:25.034 Shutdown signal received, stop current app iteration 00:05:25.034 Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 reinitialization... 00:05:25.034 spdk_app_start is called in Round 2. 00:05:25.034 Shutdown signal received, stop current app iteration 00:05:25.034 Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 reinitialization... 00:05:25.034 spdk_app_start is called in Round 3. 
00:05:25.034 Shutdown signal received, stop current app iteration 00:05:25.034 ************************************ 00:05:25.034 END TEST app_repeat 00:05:25.034 ************************************ 00:05:25.034 12:29:51 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:25.034 12:29:51 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:25.034 00:05:25.034 real 0m19.169s 00:05:25.034 user 0m42.244s 00:05:25.034 sys 0m3.078s 00:05:25.034 12:29:51 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.034 12:29:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:25.034 12:29:51 event -- common/autotest_common.sh@1142 -- # return 0 00:05:25.034 12:29:51 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:25.034 12:29:51 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:25.034 12:29:51 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:25.034 12:29:51 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.034 12:29:51 event -- common/autotest_common.sh@10 -- # set +x 00:05:25.034 ************************************ 00:05:25.034 START TEST cpu_locks 00:05:25.034 ************************************ 00:05:25.034 12:29:51 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:25.291 * Looking for test storage... 00:05:25.291 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:25.291 12:29:51 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:25.291 12:29:51 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:25.291 12:29:51 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:25.291 12:29:51 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:25.291 12:29:51 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:25.291 12:29:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.291 12:29:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:25.291 ************************************ 00:05:25.291 START TEST default_locks 00:05:25.291 ************************************ 00:05:25.291 12:29:51 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:25.291 12:29:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60743 00:05:25.291 12:29:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60743 00:05:25.291 12:29:51 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 60743 ']' 00:05:25.291 12:29:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:25.291 12:29:51 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.291 12:29:51 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:25.291 12:29:51 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
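The cpu_locks group that starts here revolves around two helpers visible in the traces above and below: locks_exist (lslocks -p <pid> | grep -q spdk_cpu_lock) and killprocess (check via ps that the pid is still the reactor, refuse to touch a sudo process, then kill and wait). What default_locks verifies condenses to the sketch below; the fixed sleep is an assumption standing in for the waitforlisten helper.

#!/usr/bin/env bash
# Hedged sketch of the default_locks check, not the cpu_locks.sh helper chain.
set -euo pipefail

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
pid=$!
sleep 2                                           # assumption: crude stand-in for waitforlisten

lslocks -p "$pid" | grep -q spdk_cpu_lock         # core mask 0x1 => a core-0 lock must exist

process_name=$(ps --no-headers -o comm= "$pid")   # shows up as reactor_0 in the traces
[ "$process_name" != sudo ]                       # killprocess safety check
echo "killing process with pid $pid"
kill "$pid"
wait "$pid" || true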
00:05:25.291 12:29:51 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:25.291 12:29:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:25.291 [2024-07-12 12:29:51.239690] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:05:25.291 [2024-07-12 12:29:51.239804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60743 ] 00:05:25.548 [2024-07-12 12:29:51.371084] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.548 [2024-07-12 12:29:51.526650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.548 [2024-07-12 12:29:51.603617] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:26.113 12:29:52 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:26.113 12:29:52 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:26.113 12:29:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60743 00:05:26.113 12:29:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60743 00:05:26.113 12:29:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:26.679 12:29:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60743 00:05:26.679 12:29:52 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 60743 ']' 00:05:26.679 12:29:52 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 60743 00:05:26.679 12:29:52 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:26.679 12:29:52 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:26.679 12:29:52 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60743 00:05:26.679 12:29:52 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:26.679 12:29:52 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:26.679 killing process with pid 60743 00:05:26.679 12:29:52 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60743' 00:05:26.679 12:29:52 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 60743 00:05:26.679 12:29:52 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 60743 00:05:27.244 12:29:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60743 00:05:27.244 12:29:53 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:27.244 12:29:53 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60743 00:05:27.244 12:29:53 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:27.244 12:29:53 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.244 12:29:53 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:27.244 12:29:53 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.244 12:29:53 
event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 60743 00:05:27.244 12:29:53 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 60743 ']' 00:05:27.244 12:29:53 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.244 12:29:53 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.244 12:29:53 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.244 12:29:53 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.244 12:29:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:27.244 ERROR: process (pid: 60743) is no longer running 00:05:27.244 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60743) - No such process 00:05:27.244 12:29:53 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.244 12:29:53 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:27.244 12:29:53 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:27.244 12:29:53 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:27.244 12:29:53 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:27.244 12:29:53 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:27.244 12:29:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:27.244 12:29:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:27.244 12:29:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:27.244 12:29:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:27.244 00:05:27.244 real 0m2.062s 00:05:27.244 user 0m2.056s 00:05:27.244 sys 0m0.686s 00:05:27.244 12:29:53 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.244 12:29:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:27.244 ************************************ 00:05:27.244 END TEST default_locks 00:05:27.244 ************************************ 00:05:27.244 12:29:53 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:27.244 12:29:53 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:27.244 12:29:53 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.244 12:29:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.244 12:29:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:27.244 ************************************ 00:05:27.244 START TEST default_locks_via_rpc 00:05:27.244 ************************************ 00:05:27.244 12:29:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:27.244 12:29:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60795 00:05:27.244 12:29:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:27.244 12:29:53 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60795 00:05:27.244 12:29:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60795 ']' 00:05:27.244 12:29:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.244 12:29:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.244 12:29:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.244 12:29:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.244 12:29:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.501 [2024-07-12 12:29:53.355597] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:05:27.501 [2024-07-12 12:29:53.355709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60795 ] 00:05:27.501 [2024-07-12 12:29:53.494617] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.758 [2024-07-12 12:29:53.668421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.758 [2024-07-12 12:29:53.741421] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:28.322 12:29:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.322 12:29:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:28.322 12:29:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:28.322 12:29:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.322 12:29:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.322 12:29:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.322 12:29:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:28.322 12:29:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:28.322 12:29:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:28.322 12:29:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:28.322 12:29:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:28.322 12:29:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.322 12:29:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.322 12:29:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.322 12:29:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60795 00:05:28.322 12:29:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60795 00:05:28.322 12:29:54 event.cpu_locks.default_locks_via_rpc 
-- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:28.885 12:29:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60795 00:05:28.885 12:29:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 60795 ']' 00:05:28.885 12:29:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 60795 00:05:28.885 12:29:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:28.885 12:29:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:28.885 12:29:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60795 00:05:28.885 killing process with pid 60795 00:05:28.885 12:29:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:28.885 12:29:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:28.885 12:29:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60795' 00:05:28.885 12:29:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 60795 00:05:28.886 12:29:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 60795 00:05:29.450 ************************************ 00:05:29.450 END TEST default_locks_via_rpc 00:05:29.450 ************************************ 00:05:29.450 00:05:29.450 real 0m2.233s 00:05:29.450 user 0m2.343s 00:05:29.450 sys 0m0.681s 00:05:29.450 12:29:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.450 12:29:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.707 12:29:55 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:29.708 12:29:55 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:29.708 12:29:55 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.708 12:29:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.708 12:29:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:29.708 ************************************ 00:05:29.708 START TEST non_locking_app_on_locked_coremask 00:05:29.708 ************************************ 00:05:29.708 12:29:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:29.708 12:29:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60846 00:05:29.708 12:29:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:29.708 12:29:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60846 /var/tmp/spdk.sock 00:05:29.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
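default_locks_via_rpc, traced above, makes the same point at runtime: framework_disable_cpumask_locks drops the core lock of a live target, and framework_enable_cpumask_locks takes it back so that locks_exist succeeds again. A sketch of that round trip follows; finding the target via pgrep is an assumption (the test uses the pid it recorded at startup), and rpc.py is left on its default /var/tmp/spdk.sock.

#!/usr/bin/env bash
# Hedged sketch of the cpumask-lock RPC round trip traced above.
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # defaults to /var/tmp/spdk.sock
pid=$(pgrep -x reactor_0)                         # assumption: exactly one target is running

"$rpc" framework_disable_cpumask_locks
if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
    echo "core lock still held after disable" >&2
    exit 1
fi

"$rpc" framework_enable_cpumask_locks
lslocks -p "$pid" | grep -q spdk_cpu_lock         # aborts if the lock did not come back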
00:05:29.708 12:29:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60846 ']' 00:05:29.708 12:29:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.708 12:29:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.708 12:29:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.708 12:29:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.708 12:29:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.708 [2024-07-12 12:29:55.634281] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:05:29.708 [2024-07-12 12:29:55.634443] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60846 ] 00:05:29.708 [2024-07-12 12:29:55.779658] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.965 [2024-07-12 12:29:55.937154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.965 [2024-07-12 12:29:56.011719] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:30.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:30.529 12:29:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:30.530 12:29:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:30.530 12:29:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60862 00:05:30.530 12:29:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:30.530 12:29:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60862 /var/tmp/spdk2.sock 00:05:30.530 12:29:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60862 ']' 00:05:30.530 12:29:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:30.530 12:29:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.530 12:29:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:30.530 12:29:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.530 12:29:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.787 [2024-07-12 12:29:56.638929] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:05:30.787 [2024-07-12 12:29:56.639338] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60862 ] 00:05:30.787 [2024-07-12 12:29:56.780987] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:30.787 [2024-07-12 12:29:56.781047] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.044 [2024-07-12 12:29:57.085690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.303 [2024-07-12 12:29:57.191850] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:31.868 12:29:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:31.868 12:29:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:31.868 12:29:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60846 00:05:31.868 12:29:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60846 00:05:31.868 12:29:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:32.433 12:29:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60846 00:05:32.433 12:29:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60846 ']' 00:05:32.433 12:29:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60846 00:05:32.433 12:29:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:32.433 12:29:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:32.433 12:29:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60846 00:05:32.433 killing process with pid 60846 00:05:32.433 12:29:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:32.433 12:29:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:32.433 12:29:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60846' 00:05:32.433 12:29:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60846 00:05:32.433 12:29:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60846 00:05:33.804 12:29:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60862 00:05:33.804 12:29:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60862 ']' 00:05:33.804 12:29:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 60862 00:05:33.804 12:29:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:33.804 12:29:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:33.804 12:29:59 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60862 00:05:33.804 killing process with pid 60862 00:05:33.804 12:29:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:33.804 12:29:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:33.804 12:29:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60862' 00:05:33.804 12:29:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 60862 00:05:33.804 12:29:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 60862 00:05:34.061 ************************************ 00:05:34.061 END TEST non_locking_app_on_locked_coremask 00:05:34.061 ************************************ 00:05:34.061 00:05:34.061 real 0m4.447s 00:05:34.061 user 0m4.813s 00:05:34.061 sys 0m1.174s 00:05:34.061 12:30:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.061 12:30:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.061 12:30:00 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:34.061 12:30:00 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:34.061 12:30:00 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.061 12:30:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.061 12:30:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:34.061 ************************************ 00:05:34.061 START TEST locking_app_on_unlocked_coremask 00:05:34.062 ************************************ 00:05:34.062 12:30:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:34.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.062 12:30:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60935 00:05:34.062 12:30:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:34.062 12:30:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60935 /var/tmp/spdk.sock 00:05:34.062 12:30:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60935 ']' 00:05:34.062 12:30:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.062 12:30:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.062 12:30:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
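The two coremask tests around this point rely on the same arrangement: two spdk_tgt instances share core mask 0x1, which only works because one of them passes --disable-cpumask-locks and each talks on its own RPC socket. non_locking_app_on_locked_coremask (just finished) lets the second instance opt out; locking_app_on_unlocked_coremask (starting here) swaps the roles. The sketch below shows the first arrangement; the sleeps stand in for the waitforlisten helper and are an assumption.

#!/usr/bin/env bash
# Hedged sketch of two targets sharing core mask 0x1.
set -euo pipefail

tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

"$tgt" -m 0x1 &                                                  # claims the core-0 lock
pid1=$!
sleep 2
"$tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # shares core 0 without locking
pid2=$!
sleep 2

kill "$pid1" "$pid2"
wait || true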
00:05:34.062 12:30:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.062 12:30:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.062 [2024-07-12 12:30:00.115509] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:05:34.062 [2024-07-12 12:30:00.115605] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60935 ] 00:05:34.319 [2024-07-12 12:30:00.245798] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:34.319 [2024-07-12 12:30:00.245901] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.577 [2024-07-12 12:30:00.413209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.577 [2024-07-12 12:30:00.467811] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:35.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:35.141 12:30:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:35.141 12:30:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:35.142 12:30:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60951 00:05:35.142 12:30:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:35.142 12:30:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60951 /var/tmp/spdk2.sock 00:05:35.142 12:30:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 60951 ']' 00:05:35.142 12:30:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:35.142 12:30:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:35.142 12:30:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:35.142 12:30:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:35.142 12:30:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.142 [2024-07-12 12:30:01.128203] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:05:35.142 [2024-07-12 12:30:01.129081] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60951 ] 00:05:35.400 [2024-07-12 12:30:01.272148] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.658 [2024-07-12 12:30:01.570298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.658 [2024-07-12 12:30:01.719956] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:36.275 12:30:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.276 12:30:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:36.276 12:30:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60951 00:05:36.276 12:30:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60951 00:05:36.276 12:30:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:36.857 12:30:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60935 00:05:36.857 12:30:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60935 ']' 00:05:36.857 12:30:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 60935 00:05:36.857 12:30:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:36.857 12:30:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:36.857 12:30:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60935 00:05:36.857 killing process with pid 60935 00:05:36.857 12:30:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:36.857 12:30:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:36.857 12:30:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60935' 00:05:36.857 12:30:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 60935 00:05:36.857 12:30:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 60935 00:05:37.788 12:30:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60951 00:05:37.788 12:30:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 60951 ']' 00:05:37.788 12:30:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 60951 00:05:37.788 12:30:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:37.788 12:30:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:37.788 12:30:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60951 00:05:37.788 killing process with pid 60951 00:05:37.788 12:30:03 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:37.788 12:30:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:37.788 12:30:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60951' 00:05:37.788 12:30:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 60951 00:05:37.788 12:30:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 60951 00:05:38.351 ************************************ 00:05:38.351 END TEST locking_app_on_unlocked_coremask 00:05:38.351 ************************************ 00:05:38.351 00:05:38.351 real 0m4.292s 00:05:38.351 user 0m4.666s 00:05:38.351 sys 0m1.109s 00:05:38.352 12:30:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.352 12:30:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.352 12:30:04 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:38.352 12:30:04 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:38.352 12:30:04 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.352 12:30:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.352 12:30:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.352 ************************************ 00:05:38.352 START TEST locking_app_on_locked_coremask 00:05:38.352 ************************************ 00:05:38.352 12:30:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:38.352 12:30:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=61023 00:05:38.352 12:30:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 61023 /var/tmp/spdk.sock 00:05:38.352 12:30:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:38.352 12:30:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 61023 ']' 00:05:38.352 12:30:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.352 12:30:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:38.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.352 12:30:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.352 12:30:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:38.352 12:30:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.609 [2024-07-12 12:30:04.453568] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:05:38.609 [2024-07-12 12:30:04.453700] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61023 ] 00:05:38.609 [2024-07-12 12:30:04.593567] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.866 [2024-07-12 12:30:04.750288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.866 [2024-07-12 12:30:04.829732] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:39.505 12:30:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.505 12:30:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:39.505 12:30:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=61039 00:05:39.505 12:30:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:39.505 12:30:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 61039 /var/tmp/spdk2.sock 00:05:39.505 12:30:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:39.505 12:30:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 61039 /var/tmp/spdk2.sock 00:05:39.505 12:30:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:39.505 12:30:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:39.505 12:30:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:39.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:39.505 12:30:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:39.505 12:30:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 61039 /var/tmp/spdk2.sock 00:05:39.505 12:30:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 61039 ']' 00:05:39.505 12:30:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:39.505 12:30:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:39.505 12:30:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:39.505 12:30:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:39.505 12:30:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:39.505 [2024-07-12 12:30:05.533653] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:05:39.505 [2024-07-12 12:30:05.533786] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61039 ] 00:05:39.762 [2024-07-12 12:30:05.685131] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 61023 has claimed it. 00:05:39.762 [2024-07-12 12:30:05.685219] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:40.327 ERROR: process (pid: 61039) is no longer running 00:05:40.327 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (61039) - No such process 00:05:40.327 12:30:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:40.327 12:30:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:40.327 12:30:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:40.327 12:30:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:40.327 12:30:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:40.327 12:30:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:40.327 12:30:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 61023 00:05:40.327 12:30:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61023 00:05:40.327 12:30:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:40.585 12:30:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 61023 00:05:40.585 12:30:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 61023 ']' 00:05:40.585 12:30:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 61023 00:05:40.585 12:30:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:40.585 12:30:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:40.585 12:30:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61023 00:05:40.585 killing process with pid 61023 00:05:40.585 12:30:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:40.585 12:30:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:40.585 12:30:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61023' 00:05:40.585 12:30:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 61023 00:05:40.585 12:30:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 61023 00:05:41.152 00:05:41.152 real 0m2.783s 00:05:41.152 user 0m3.135s 00:05:41.152 sys 0m0.699s 00:05:41.152 12:30:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.152 12:30:07 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:05:41.152 ************************************ 00:05:41.152 END TEST locking_app_on_locked_coremask 00:05:41.152 ************************************ 00:05:41.152 12:30:07 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:41.152 12:30:07 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:41.152 12:30:07 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.152 12:30:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.152 12:30:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:41.152 ************************************ 00:05:41.152 START TEST locking_overlapped_coremask 00:05:41.152 ************************************ 00:05:41.152 12:30:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:41.152 12:30:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=61085 00:05:41.152 12:30:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 61085 /var/tmp/spdk.sock 00:05:41.152 12:30:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 61085 ']' 00:05:41.152 12:30:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.152 12:30:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:41.152 12:30:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.152 12:30:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.152 12:30:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.152 12:30:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.411 [2024-07-12 12:30:07.276053] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:05:41.411 [2024-07-12 12:30:07.276154] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61085 ] 00:05:41.411 [2024-07-12 12:30:07.415152] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:41.669 [2024-07-12 12:30:07.582325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.669 [2024-07-12 12:30:07.582483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:41.669 [2024-07-12 12:30:07.582488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.669 [2024-07-12 12:30:07.664872] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:42.233 12:30:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.233 12:30:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:42.233 12:30:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61103 00:05:42.233 12:30:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61103 /var/tmp/spdk2.sock 00:05:42.233 12:30:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:42.233 12:30:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:42.233 12:30:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 61103 /var/tmp/spdk2.sock 00:05:42.233 12:30:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:42.233 12:30:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:42.233 12:30:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:42.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:42.233 12:30:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:42.233 12:30:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 61103 /var/tmp/spdk2.sock 00:05:42.233 12:30:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 61103 ']' 00:05:42.233 12:30:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:42.233 12:30:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:42.233 12:30:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:42.233 12:30:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:42.233 12:30:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.233 [2024-07-12 12:30:08.240040] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
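The two targets in this test are started with deliberately overlapping core masks: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so core 2 is contested and the second target's lock claim that follows is expected to fail. The overlap can be checked with plain shell arithmetic (illustrative only, not part of the test):

    # 0x7  = 0b00111 -> cores 0,1,2
    # 0x1c = 0b11100 -> cores 2,3,4
    printf 'overlapping mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2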
00:05:42.233 [2024-07-12 12:30:08.240151] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61103 ] 00:05:42.490 [2024-07-12 12:30:08.387104] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61085 has claimed it. 00:05:42.490 [2024-07-12 12:30:08.387207] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:43.054 ERROR: process (pid: 61103) is no longer running 00:05:43.054 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (61103) - No such process 00:05:43.054 12:30:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:43.054 12:30:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:43.054 12:30:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:43.054 12:30:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:43.054 12:30:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:43.054 12:30:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:43.054 12:30:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:43.054 12:30:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:43.054 12:30:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:43.054 12:30:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:43.054 12:30:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 61085 00:05:43.054 12:30:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 61085 ']' 00:05:43.054 12:30:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 61085 00:05:43.054 12:30:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:43.054 12:30:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:43.054 12:30:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61085 00:05:43.054 12:30:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:43.054 12:30:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:43.054 12:30:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61085' 00:05:43.054 killing process with pid 61085 00:05:43.054 12:30:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 61085 00:05:43.054 12:30:09 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 61085 00:05:43.619 00:05:43.619 real 0m2.375s 00:05:43.619 user 0m6.258s 00:05:43.619 sys 0m0.533s 00:05:43.619 12:30:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.619 12:30:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.619 ************************************ 00:05:43.619 END TEST locking_overlapped_coremask 00:05:43.619 ************************************ 00:05:43.619 12:30:09 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:43.619 12:30:09 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:43.619 12:30:09 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.619 12:30:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.619 12:30:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.619 ************************************ 00:05:43.619 START TEST locking_overlapped_coremask_via_rpc 00:05:43.619 ************************************ 00:05:43.619 12:30:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:43.619 12:30:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61148 00:05:43.619 12:30:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61148 /var/tmp/spdk.sock 00:05:43.619 12:30:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61148 ']' 00:05:43.619 12:30:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:43.619 12:30:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.619 12:30:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.619 12:30:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.619 12:30:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.619 12:30:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.877 [2024-07-12 12:30:09.694378] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:05:43.877 [2024-07-12 12:30:09.694511] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61148 ] 00:05:43.877 [2024-07-12 12:30:09.829577] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
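check_remaining_locks, traced above, expands /var/tmp/spdk_cpu_lock_* and compares the result with the brace expansion /var/tmp/spdk_cpu_lock_{000..002}: one lock file per core of the 0x7 mask, and nothing left over. The same state can be inspected by hand while a target holds its locks; a sketch assuming the default lock paths:

    # With a -m 0x7 target running, exactly three lock files should exist.
    ls /var/tmp/spdk_cpu_lock_*        # expect _000 _001 _002
    lslocks | grep spdk_cpu_lock       # the reactor process holds an advisory lock on each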
00:05:43.877 [2024-07-12 12:30:09.829663] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:44.134 [2024-07-12 12:30:09.991868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.134 [2024-07-12 12:30:09.991970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:44.135 [2024-07-12 12:30:09.991973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.135 [2024-07-12 12:30:10.076022] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:44.698 12:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.698 12:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:44.698 12:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61165 00:05:44.698 12:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:44.698 12:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61165 /var/tmp/spdk2.sock 00:05:44.698 12:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61165 ']' 00:05:44.698 12:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:44.698 12:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.698 12:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:44.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:44.698 12:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.698 12:30:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.699 [2024-07-12 12:30:10.694024] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:05:44.699 [2024-07-12 12:30:10.694334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61165 ] 00:05:44.955 [2024-07-12 12:30:10.836280] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:44.955 [2024-07-12 12:30:10.836344] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:45.212 [2024-07-12 12:30:11.133544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:45.212 [2024-07-12 12:30:11.137468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:45.212 [2024-07-12 12:30:11.137470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:45.212 [2024-07-12 12:30:11.243765] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:45.776 12:30:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.776 12:30:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:45.776 12:30:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:45.776 12:30:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.776 12:30:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.776 12:30:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.776 12:30:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:45.776 12:30:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:45.776 12:30:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:45.776 12:30:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:45.776 12:30:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:45.776 12:30:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:45.776 12:30:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:45.776 12:30:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:45.776 12:30:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.776 12:30:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.776 [2024-07-12 12:30:11.687559] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61148 has claimed it. 
00:05:45.776 request: 00:05:45.776 { 00:05:45.776 "method": "framework_enable_cpumask_locks", 00:05:45.776 "req_id": 1 00:05:45.776 } 00:05:45.776 Got JSON-RPC error response 00:05:45.776 response: 00:05:45.776 { 00:05:45.776 "code": -32603, 00:05:45.776 "message": "Failed to claim CPU core: 2" 00:05:45.776 } 00:05:45.776 12:30:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:45.776 12:30:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:45.776 12:30:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:45.776 12:30:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:45.776 12:30:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:45.776 12:30:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61148 /var/tmp/spdk.sock 00:05:45.776 12:30:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61148 ']' 00:05:45.776 12:30:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.776 12:30:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.776 12:30:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.776 12:30:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.776 12:30:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:46.033 12:30:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.033 12:30:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:46.033 12:30:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61165 /var/tmp/spdk2.sock 00:05:46.033 12:30:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61165 ']' 00:05:46.033 12:30:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:46.033 12:30:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.033 12:30:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
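The request/response pair above is the JSON-RPC exchange behind rpc_cmd framework_enable_cpumask_locks. Both targets were started with --disable-cpumask-locks; the first (pid 61148, mask 0x7, /var/tmp/spdk.sock) enabled its locks successfully, and the second (pid 61165, mask 0x1c, /var/tmp/spdk2.sock) now gets error -32603 because core 2 is already claimed. Issued by hand, the same two calls would look roughly like the sketch below (rpc.py path relative to the SPDK repo root is assumed; the trace itself goes through the rpc_cmd wrapper):

    # First target claims cores 0-2; expected to succeed.
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
    # Second target overlaps on core 2; expected to fail with
    # "Failed to claim CPU core: 2" (JSON-RPC error -32603).
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks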
00:05:46.033 12:30:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.033 12:30:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.291 ************************************ 00:05:46.291 END TEST locking_overlapped_coremask_via_rpc 00:05:46.291 ************************************ 00:05:46.291 12:30:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.291 12:30:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:46.291 12:30:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:46.291 12:30:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:46.291 12:30:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:46.291 12:30:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:46.291 00:05:46.291 real 0m2.558s 00:05:46.291 user 0m1.286s 00:05:46.291 sys 0m0.187s 00:05:46.291 12:30:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.291 12:30:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.291 12:30:12 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:46.291 12:30:12 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:46.291 12:30:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61148 ]] 00:05:46.291 12:30:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61148 00:05:46.291 12:30:12 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61148 ']' 00:05:46.291 12:30:12 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61148 00:05:46.291 12:30:12 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:46.291 12:30:12 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:46.291 12:30:12 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61148 00:05:46.291 killing process with pid 61148 00:05:46.291 12:30:12 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:46.291 12:30:12 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:46.291 12:30:12 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61148' 00:05:46.291 12:30:12 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 61148 00:05:46.291 12:30:12 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 61148 00:05:46.856 12:30:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61165 ]] 00:05:46.856 12:30:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61165 00:05:46.856 12:30:12 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61165 ']' 00:05:46.856 12:30:12 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61165 00:05:46.856 12:30:12 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:46.856 12:30:12 
event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:46.856 12:30:12 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61165 00:05:46.856 killing process with pid 61165 00:05:46.856 12:30:12 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:46.856 12:30:12 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:46.856 12:30:12 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61165' 00:05:46.856 12:30:12 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 61165 00:05:46.856 12:30:12 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 61165 00:05:47.427 12:30:13 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:47.427 12:30:13 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:47.427 12:30:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61148 ]] 00:05:47.427 12:30:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61148 00:05:47.427 12:30:13 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61148 ']' 00:05:47.427 12:30:13 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61148 00:05:47.427 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (61148) - No such process 00:05:47.427 Process with pid 61148 is not found 00:05:47.427 Process with pid 61165 is not found 00:05:47.427 12:30:13 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 61148 is not found' 00:05:47.427 12:30:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61165 ]] 00:05:47.427 12:30:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61165 00:05:47.427 12:30:13 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61165 ']' 00:05:47.427 12:30:13 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61165 00:05:47.427 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (61165) - No such process 00:05:47.427 12:30:13 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 61165 is not found' 00:05:47.427 12:30:13 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:47.427 00:05:47.427 real 0m22.195s 00:05:47.427 user 0m37.090s 00:05:47.427 sys 0m6.004s 00:05:47.427 12:30:13 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.427 12:30:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.427 ************************************ 00:05:47.427 END TEST cpu_locks 00:05:47.427 ************************************ 00:05:47.427 12:30:13 event -- common/autotest_common.sh@1142 -- # return 0 00:05:47.427 ************************************ 00:05:47.427 END TEST event 00:05:47.427 ************************************ 00:05:47.427 00:05:47.427 real 0m49.267s 00:05:47.427 user 1m31.943s 00:05:47.427 sys 0m9.973s 00:05:47.427 12:30:13 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.427 12:30:13 event -- common/autotest_common.sh@10 -- # set +x 00:05:47.427 12:30:13 -- common/autotest_common.sh@1142 -- # return 0 00:05:47.427 12:30:13 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:47.427 12:30:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:47.427 12:30:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.427 12:30:13 -- common/autotest_common.sh@10 -- # set +x 00:05:47.427 ************************************ 00:05:47.427 START TEST thread 
00:05:47.427 ************************************ 00:05:47.427 12:30:13 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:47.427 * Looking for test storage... 00:05:47.427 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:47.427 12:30:13 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:47.427 12:30:13 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:47.427 12:30:13 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.427 12:30:13 thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.427 ************************************ 00:05:47.427 START TEST thread_poller_perf 00:05:47.427 ************************************ 00:05:47.427 12:30:13 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:47.427 [2024-07-12 12:30:13.488670] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:05:47.427 [2024-07-12 12:30:13.488770] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61289 ] 00:05:47.684 [2024-07-12 12:30:13.628756] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.942 [2024-07-12 12:30:13.775579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.942 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:48.875 ====================================== 00:05:48.875 busy:2211391556 (cyc) 00:05:48.875 total_run_count: 310000 00:05:48.875 tsc_hz: 2200000000 (cyc) 00:05:48.875 ====================================== 00:05:48.875 poller_cost: 7133 (cyc), 3242 (nsec) 00:05:48.875 00:05:48.875 real 0m1.432s 00:05:48.875 user 0m1.258s 00:05:48.875 sys 0m0.066s 00:05:48.875 12:30:14 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.876 12:30:14 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:48.876 ************************************ 00:05:48.876 END TEST thread_poller_perf 00:05:48.876 ************************************ 00:05:48.876 12:30:14 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:48.876 12:30:14 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:48.876 12:30:14 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:48.876 12:30:14 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.876 12:30:14 thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.134 ************************************ 00:05:49.134 START TEST thread_poller_perf 00:05:49.134 ************************************ 00:05:49.134 12:30:14 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:49.134 [2024-07-12 12:30:14.973350] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
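The poller_cost figure above is just the busy cycle count divided by the run count, converted to nanoseconds with the reported TSC frequency. Reproducing this run's numbers with integer shell arithmetic (rounding down, which matches the printed values):

    busy=2211391556 runs=310000 tsc_hz=2200000000
    echo $(( busy / runs ))                          # 7133 cycles per poller call
    echo $(( busy / runs * 1000000000 / tsc_hz ))    # 3242 nsec at 2.2 GHz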
00:05:49.134 [2024-07-12 12:30:14.973473] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61324 ] 00:05:49.134 [2024-07-12 12:30:15.114355] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.391 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:49.391 [2024-07-12 12:30:15.263928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.326 ====================================== 00:05:50.326 busy:2202460923 (cyc) 00:05:50.326 total_run_count: 3956000 00:05:50.326 tsc_hz: 2200000000 (cyc) 00:05:50.326 ====================================== 00:05:50.326 poller_cost: 556 (cyc), 252 (nsec) 00:05:50.326 00:05:50.326 real 0m1.441s 00:05:50.326 user 0m1.260s 00:05:50.326 sys 0m0.073s 00:05:50.326 12:30:16 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.326 12:30:16 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:50.326 ************************************ 00:05:50.326 END TEST thread_poller_perf 00:05:50.326 ************************************ 00:05:50.584 12:30:16 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:50.584 12:30:16 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:50.584 ************************************ 00:05:50.584 END TEST thread 00:05:50.584 ************************************ 00:05:50.584 00:05:50.584 real 0m3.060s 00:05:50.584 user 0m2.586s 00:05:50.584 sys 0m0.255s 00:05:50.584 12:30:16 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.584 12:30:16 thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.584 12:30:16 -- common/autotest_common.sh@1142 -- # return 0 00:05:50.584 12:30:16 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:50.584 12:30:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:50.584 12:30:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.584 12:30:16 -- common/autotest_common.sh@10 -- # set +x 00:05:50.584 ************************************ 00:05:50.584 START TEST accel 00:05:50.584 ************************************ 00:05:50.584 12:30:16 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:50.584 * Looking for test storage... 00:05:50.584 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:50.584 12:30:16 accel -- accel/accel.sh@95 -- # declare -A expected_opcs 00:05:50.584 12:30:16 accel -- accel/accel.sh@96 -- # get_expected_opcs 00:05:50.584 12:30:16 accel -- accel/accel.sh@69 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:50.584 12:30:16 accel -- accel/accel.sh@71 -- # spdk_tgt_pid=61399 00:05:50.584 12:30:16 accel -- accel/accel.sh@72 -- # waitforlisten 61399 00:05:50.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.584 12:30:16 accel -- common/autotest_common.sh@829 -- # '[' -z 61399 ']' 00:05:50.584 12:30:16 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.584 12:30:16 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.584 12:30:16 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
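The same arithmetic applied to the second run (period 0 instead of 1 microsecond) reproduces the smaller per-call cost reported above; the gap between 7133 and 556 cycles is consistent with the extra timer bookkeeping a timed poller does, though the log itself only reports the raw counters:

    echo $(( 2202460923 / 3956000 ))                              # 556 cycles
    echo $(( 2202460923 / 3956000 * 1000000000 / 2200000000 ))    # 252 nsec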
00:05:50.584 12:30:16 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.584 12:30:16 accel -- accel/accel.sh@70 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:50.584 12:30:16 accel -- common/autotest_common.sh@10 -- # set +x 00:05:50.584 12:30:16 accel -- accel/accel.sh@70 -- # build_accel_config 00:05:50.584 12:30:16 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:50.584 12:30:16 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:50.584 12:30:16 accel -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:05:50.584 12:30:16 accel -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:05:50.584 12:30:16 accel -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:05:50.584 12:30:16 accel -- accel/accel.sh@45 -- # [[ -n '' ]] 00:05:50.584 12:30:16 accel -- accel/accel.sh@49 -- # local IFS=, 00:05:50.584 12:30:16 accel -- accel/accel.sh@50 -- # jq -r . 00:05:50.584 [2024-07-12 12:30:16.636590] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:05:50.584 [2024-07-12 12:30:16.636708] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61399 ] 00:05:50.871 [2024-07-12 12:30:16.774479] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.128 [2024-07-12 12:30:16.949668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.128 [2024-07-12 12:30:17.007906] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:05:51.694 12:30:17 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.694 12:30:17 accel -- common/autotest_common.sh@862 -- # return 0 00:05:51.694 12:30:17 accel -- accel/accel.sh@74 -- # [[ 0 -gt 0 ]] 00:05:51.694 12:30:17 accel -- accel/accel.sh@77 -- # [[ '' != \k\e\r\n\e\l ]] 00:05:51.694 12:30:17 accel -- accel/accel.sh@78 -- # [[ 0 -gt 0 ]] 00:05:51.694 12:30:17 accel -- accel/accel.sh@81 -- # [[ 0 -gt 0 ]] 00:05:51.694 12:30:17 accel -- accel/accel.sh@82 -- # [[ -n '' ]] 00:05:51.694 12:30:17 accel -- accel/accel.sh@84 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:51.694 12:30:17 accel -- accel/accel.sh@84 -- # rpc_cmd accel_get_opc_assignments 00:05:51.694 12:30:17 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.694 12:30:17 accel -- common/autotest_common.sh@10 -- # set +x 00:05:51.694 12:30:17 accel -- accel/accel.sh@84 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:51.694 12:30:17 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.694 12:30:17 accel -- accel/accel.sh@85 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.694 12:30:17 accel -- accel/accel.sh@86 -- # IFS== 00:05:51.694 12:30:17 accel -- accel/accel.sh@86 -- # read -r opc module 00:05:51.694 12:30:17 accel -- accel/accel.sh@87 -- # expected_opcs["$opc"]=software 00:05:51.694 12:30:17 accel -- accel/accel.sh@85 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.694 12:30:17 accel -- accel/accel.sh@86 -- # IFS== 00:05:51.694 12:30:17 accel -- accel/accel.sh@86 -- # read -r opc module 00:05:51.694 12:30:17 accel -- accel/accel.sh@87 -- # expected_opcs["$opc"]=software 00:05:51.694 12:30:17 accel -- accel/accel.sh@85 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.694 12:30:17 accel -- accel/accel.sh@86 -- # IFS== 00:05:51.694 12:30:17 accel -- accel/accel.sh@86 -- # read -r opc module 00:05:51.694 12:30:17 accel -- accel/accel.sh@87 -- # expected_opcs["$opc"]=software 00:05:51.694 12:30:17 accel -- accel/accel.sh@85 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.694 12:30:17 accel -- accel/accel.sh@86 -- # IFS== 00:05:51.694 12:30:17 accel -- accel/accel.sh@86 -- # read -r opc module 00:05:51.694 12:30:17 accel -- accel/accel.sh@87 -- # expected_opcs["$opc"]=software 00:05:51.694 12:30:17 accel -- accel/accel.sh@85 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.694 12:30:17 accel -- accel/accel.sh@86 -- # IFS== 00:05:51.694 12:30:17 accel -- accel/accel.sh@86 -- # read -r opc module 00:05:51.694 12:30:17 accel -- accel/accel.sh@87 -- # expected_opcs["$opc"]=software 00:05:51.694 12:30:17 accel -- accel/accel.sh@85 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.694 12:30:17 accel -- accel/accel.sh@86 -- # IFS== 00:05:51.694 12:30:17 accel -- accel/accel.sh@86 -- # read -r opc module 00:05:51.694 12:30:17 accel -- accel/accel.sh@87 -- # expected_opcs["$opc"]=software 00:05:51.694 12:30:17 accel -- accel/accel.sh@85 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.695 12:30:17 accel -- accel/accel.sh@86 -- # IFS== 00:05:51.695 12:30:17 accel -- accel/accel.sh@86 -- # read -r opc module 00:05:51.695 12:30:17 accel -- accel/accel.sh@87 -- # expected_opcs["$opc"]=software 00:05:51.695 12:30:17 accel -- accel/accel.sh@85 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.695 12:30:17 accel -- accel/accel.sh@86 -- # IFS== 00:05:51.695 12:30:17 accel -- accel/accel.sh@86 -- # read -r opc module 00:05:51.695 12:30:17 accel -- accel/accel.sh@87 -- # expected_opcs["$opc"]=software 00:05:51.695 12:30:17 accel -- accel/accel.sh@85 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.695 12:30:17 accel -- accel/accel.sh@86 -- # IFS== 00:05:51.695 12:30:17 accel -- accel/accel.sh@86 -- # read -r opc module 00:05:51.695 12:30:17 accel -- accel/accel.sh@87 -- # expected_opcs["$opc"]=software 00:05:51.695 12:30:17 accel -- accel/accel.sh@85 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.695 12:30:17 accel -- accel/accel.sh@86 -- # IFS== 00:05:51.695 12:30:17 accel -- accel/accel.sh@86 -- # read -r opc module 00:05:51.695 12:30:17 accel -- accel/accel.sh@87 -- # expected_opcs["$opc"]=software 00:05:51.695 12:30:17 accel -- accel/accel.sh@85 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.695 12:30:17 accel -- accel/accel.sh@86 -- # IFS== 00:05:51.695 12:30:17 accel -- accel/accel.sh@86 -- # read -r opc module 00:05:51.695 12:30:17 accel -- accel/accel.sh@87 -- # expected_opcs["$opc"]=software 00:05:51.695 12:30:17 accel -- accel/accel.sh@85 -- # for opc_opt in 
"${exp_opcs[@]}" 00:05:51.695 12:30:17 accel -- accel/accel.sh@86 -- # IFS== 00:05:51.695 12:30:17 accel -- accel/accel.sh@86 -- # read -r opc module 00:05:51.695 12:30:17 accel -- accel/accel.sh@87 -- # expected_opcs["$opc"]=software 00:05:51.695 12:30:17 accel -- accel/accel.sh@85 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.695 12:30:17 accel -- accel/accel.sh@86 -- # IFS== 00:05:51.695 12:30:17 accel -- accel/accel.sh@86 -- # read -r opc module 00:05:51.695 12:30:17 accel -- accel/accel.sh@87 -- # expected_opcs["$opc"]=software 00:05:51.695 12:30:17 accel -- accel/accel.sh@85 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.695 12:30:17 accel -- accel/accel.sh@86 -- # IFS== 00:05:51.695 12:30:17 accel -- accel/accel.sh@86 -- # read -r opc module 00:05:51.695 12:30:17 accel -- accel/accel.sh@87 -- # expected_opcs["$opc"]=software 00:05:51.695 12:30:17 accel -- accel/accel.sh@85 -- # for opc_opt in "${exp_opcs[@]}" 00:05:51.695 12:30:17 accel -- accel/accel.sh@86 -- # IFS== 00:05:51.695 12:30:17 accel -- accel/accel.sh@86 -- # read -r opc module 00:05:51.695 12:30:17 accel -- accel/accel.sh@87 -- # expected_opcs["$opc"]=software 00:05:51.695 12:30:17 accel -- accel/accel.sh@89 -- # killprocess 61399 00:05:51.695 12:30:17 accel -- common/autotest_common.sh@948 -- # '[' -z 61399 ']' 00:05:51.695 12:30:17 accel -- common/autotest_common.sh@952 -- # kill -0 61399 00:05:51.695 12:30:17 accel -- common/autotest_common.sh@953 -- # uname 00:05:51.695 12:30:17 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:51.695 12:30:17 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61399 00:05:51.695 killing process with pid 61399 00:05:51.695 12:30:17 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:51.695 12:30:17 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:51.695 12:30:17 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61399' 00:05:51.695 12:30:17 accel -- common/autotest_common.sh@967 -- # kill 61399 00:05:51.695 12:30:17 accel -- common/autotest_common.sh@972 -- # wait 61399 00:05:52.261 12:30:18 accel -- accel/accel.sh@90 -- # trap - ERR 00:05:52.261 12:30:18 accel -- accel/accel.sh@103 -- # run_test accel_help accel_perf -h 00:05:52.261 12:30:18 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:52.261 12:30:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.261 12:30:18 accel -- common/autotest_common.sh@10 -- # set +x 00:05:52.261 12:30:18 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:52.261 12:30:18 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:52.261 12:30:18 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:52.261 12:30:18 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:52.261 12:30:18 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:52.261 12:30:18 accel.accel_help -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:05:52.261 12:30:18 accel.accel_help -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:05:52.261 12:30:18 accel.accel_help -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:05:52.261 12:30:18 accel.accel_help -- accel/accel.sh@45 -- # [[ -n '' ]] 00:05:52.261 12:30:18 accel.accel_help -- accel/accel.sh@49 -- # local IFS=, 00:05:52.261 12:30:18 accel.accel_help -- accel/accel.sh@50 -- # jq -r . 
00:05:52.261 12:30:18 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.261 12:30:18 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:52.261 12:30:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:52.261 12:30:18 accel -- accel/accel.sh@105 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:52.261 12:30:18 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:52.261 12:30:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.261 12:30:18 accel -- common/autotest_common.sh@10 -- # set +x 00:05:52.261 ************************************ 00:05:52.261 START TEST accel_missing_filename 00:05:52.261 ************************************ 00:05:52.261 12:30:18 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:52.261 12:30:18 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:52.261 12:30:18 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:52.261 12:30:18 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:52.261 12:30:18 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:52.261 12:30:18 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:52.261 12:30:18 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:52.261 12:30:18 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:52.261 12:30:18 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:52.261 12:30:18 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:52.261 12:30:18 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:52.261 12:30:18 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:52.261 12:30:18 accel.accel_missing_filename -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:05:52.261 12:30:18 accel.accel_missing_filename -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:05:52.261 12:30:18 accel.accel_missing_filename -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:05:52.261 12:30:18 accel.accel_missing_filename -- accel/accel.sh@45 -- # [[ -n '' ]] 00:05:52.261 12:30:18 accel.accel_missing_filename -- accel/accel.sh@49 -- # local IFS=, 00:05:52.261 12:30:18 accel.accel_missing_filename -- accel/accel.sh@50 -- # jq -r . 00:05:52.261 [2024-07-12 12:30:18.279804] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
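The long IFS== loop in the accel setup above reads the accel_get_opc_assignments RPC output, flattened by jq into key=value pairs, and records the expected module for every opcode (software in this run, since no hardware accel module is configured). Outside the harness the same query can be made directly; a sketch reusing the jq filter from the trace, with the default RPC socket and repo-relative rpc.py path assumed:

    # List opcode -> module assignments from a running SPDK target.
    scripts/rpc.py accel_get_opc_assignments \
      | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
    # e.g. copy=software, compress=software, crc32c=software, ...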
00:05:52.261 [2024-07-12 12:30:18.279893] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61456 ] 00:05:52.520 [2024-07-12 12:30:18.414822] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.520 [2024-07-12 12:30:18.562017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.779 [2024-07-12 12:30:18.617789] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:52.779 [2024-07-12 12:30:18.696466] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:52.779 A filename is required. 00:05:52.779 12:30:18 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:52.779 12:30:18 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:52.779 12:30:18 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:52.779 12:30:18 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:52.779 12:30:18 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:52.779 ************************************ 00:05:52.779 END TEST accel_missing_filename 00:05:52.779 ************************************ 00:05:52.779 12:30:18 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:52.779 00:05:52.779 real 0m0.551s 00:05:52.779 user 0m0.378s 00:05:52.779 sys 0m0.121s 00:05:52.779 12:30:18 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.779 12:30:18 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:52.779 12:30:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:52.779 12:30:18 accel -- accel/accel.sh@107 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:52.779 12:30:18 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:52.779 12:30:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.779 12:30:18 accel -- common/autotest_common.sh@10 -- # set +x 00:05:53.037 ************************************ 00:05:53.037 START TEST accel_compress_verify 00:05:53.037 ************************************ 00:05:53.037 12:30:18 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:53.037 12:30:18 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:53.037 12:30:18 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:53.037 12:30:18 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:53.037 12:30:18 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:53.037 12:30:18 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:53.037 12:30:18 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:53.037 12:30:18 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:53.037 12:30:18 accel.accel_compress_verify -- 
accel/accel.sh@12 -- # build_accel_config 00:05:53.037 12:30:18 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:53.037 12:30:18 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.037 12:30:18 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.037 12:30:18 accel.accel_compress_verify -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:05:53.037 12:30:18 accel.accel_compress_verify -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:05:53.037 12:30:18 accel.accel_compress_verify -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:05:53.037 12:30:18 accel.accel_compress_verify -- accel/accel.sh@45 -- # [[ -n '' ]] 00:05:53.037 12:30:18 accel.accel_compress_verify -- accel/accel.sh@49 -- # local IFS=, 00:05:53.037 12:30:18 accel.accel_compress_verify -- accel/accel.sh@50 -- # jq -r . 00:05:53.037 [2024-07-12 12:30:18.883219] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:05:53.037 [2024-07-12 12:30:18.883325] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61475 ] 00:05:53.037 [2024-07-12 12:30:19.015649] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.295 [2024-07-12 12:30:19.155876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.295 [2024-07-12 12:30:19.211654] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:53.295 [2024-07-12 12:30:19.289526] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:53.555 00:05:53.555 Compression does not support the verify option, aborting. 
00:05:53.555 12:30:19 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:53.555 12:30:19 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:53.555 12:30:19 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:53.555 12:30:19 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:53.555 12:30:19 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:53.555 12:30:19 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:53.555 00:05:53.555 real 0m0.547s 00:05:53.555 user 0m0.375s 00:05:53.555 sys 0m0.114s 00:05:53.555 12:30:19 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.555 12:30:19 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:53.555 ************************************ 00:05:53.555 END TEST accel_compress_verify 00:05:53.555 ************************************ 00:05:53.555 12:30:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:53.555 12:30:19 accel -- accel/accel.sh@109 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:53.555 12:30:19 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:53.555 12:30:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.555 12:30:19 accel -- common/autotest_common.sh@10 -- # set +x 00:05:53.555 ************************************ 00:05:53.555 START TEST accel_wrong_workload 00:05:53.555 ************************************ 00:05:53.555 12:30:19 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:53.555 12:30:19 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:53.555 12:30:19 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:53.555 12:30:19 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:53.555 12:30:19 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:53.555 12:30:19 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:53.555 12:30:19 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:53.555 12:30:19 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:05:53.555 12:30:19 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:53.555 12:30:19 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:53.555 12:30:19 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.555 12:30:19 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.555 12:30:19 accel.accel_wrong_workload -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:05:53.555 12:30:19 accel.accel_wrong_workload -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:05:53.555 12:30:19 accel.accel_wrong_workload -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:05:53.555 12:30:19 accel.accel_wrong_workload -- accel/accel.sh@45 -- # [[ -n '' ]] 00:05:53.555 12:30:19 accel.accel_wrong_workload -- accel/accel.sh@49 -- # local IFS=, 00:05:53.555 12:30:19 accel.accel_wrong_workload -- accel/accel.sh@50 -- # jq -r . 
00:05:53.555 Unsupported workload type: foobar 00:05:53.555 [2024-07-12 12:30:19.482060] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:53.555 accel_perf options: 00:05:53.555 [-h help message] 00:05:53.555 [-q queue depth per core] 00:05:53.555 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:53.555 [-T number of threads per core 00:05:53.555 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:53.555 [-t time in seconds] 00:05:53.555 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:53.555 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:53.555 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:53.555 [-l for compress/decompress workloads, name of uncompressed input file 00:05:53.555 [-S for crc32c workload, use this seed value (default 0) 00:05:53.555 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:53.555 [-f for fill workload, use this BYTE value (default 255) 00:05:53.555 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:53.555 [-y verify result if this switch is on] 00:05:53.555 [-a tasks to allocate per core (default: same value as -q)] 00:05:53.555 Can be used to spread operations across a wider range of memory. 00:05:53.555 12:30:19 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:53.555 12:30:19 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:53.555 12:30:19 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:53.555 12:30:19 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:53.555 00:05:53.555 real 0m0.029s 00:05:53.555 user 0m0.015s 00:05:53.555 sys 0m0.013s 00:05:53.555 ************************************ 00:05:53.555 END TEST accel_wrong_workload 00:05:53.555 ************************************ 00:05:53.555 12:30:19 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.555 12:30:19 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:53.555 12:30:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:53.555 12:30:19 accel -- accel/accel.sh@111 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:53.555 12:30:19 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:53.555 12:30:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.555 12:30:19 accel -- common/autotest_common.sh@10 -- # set +x 00:05:53.555 ************************************ 00:05:53.555 START TEST accel_negative_buffers 00:05:53.555 ************************************ 00:05:53.555 12:30:19 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:53.555 12:30:19 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:53.555 12:30:19 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:53.555 12:30:19 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:53.555 12:30:19 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:53.555 12:30:19 
accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:53.555 12:30:19 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:53.555 12:30:19 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:53.555 12:30:19 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:53.555 12:30:19 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:53.555 12:30:19 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.555 12:30:19 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.555 12:30:19 accel.accel_negative_buffers -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:05:53.555 12:30:19 accel.accel_negative_buffers -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:05:53.555 12:30:19 accel.accel_negative_buffers -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:05:53.555 12:30:19 accel.accel_negative_buffers -- accel/accel.sh@45 -- # [[ -n '' ]] 00:05:53.555 12:30:19 accel.accel_negative_buffers -- accel/accel.sh@49 -- # local IFS=, 00:05:53.556 12:30:19 accel.accel_negative_buffers -- accel/accel.sh@50 -- # jq -r . 00:05:53.556 -x option must be non-negative. 00:05:53.556 [2024-07-12 12:30:19.559205] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:53.556 accel_perf options: 00:05:53.556 [-h help message] 00:05:53.556 [-q queue depth per core] 00:05:53.556 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:53.556 [-T number of threads per core 00:05:53.556 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:53.556 [-t time in seconds] 00:05:53.556 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:53.556 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:53.556 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:53.556 [-l for compress/decompress workloads, name of uncompressed input file 00:05:53.556 [-S for crc32c workload, use this seed value (default 0) 00:05:53.556 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:53.556 [-f for fill workload, use this BYTE value (default 255) 00:05:53.556 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:53.556 [-y verify result if this switch is on] 00:05:53.556 [-a tasks to allocate per core (default: same value as -q)] 00:05:53.556 Can be used to spread operations across a wider range of memory. 
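The option listing above is the usage text accel_perf prints whenever it rejects a parameter, as it does here for the invalid workload '-w foobar' and the negative buffer count '-x -1'. For reference, a stand-alone run of the software-path crc32c case exercised later in this log would look roughly like the line below; the binary path and flags are copied from the traced invocations, and the only assumption is dropping the '-c /dev/fd/62' argument, which the test harness uses to feed a generated JSON accel config and which a manual run would replace with a real config file or omit:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y

Per the option list, -t is the run time in seconds, -w the workload type, -S the crc32c seed and -y turns on result verification.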
00:05:53.556 12:30:19 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:53.556 12:30:19 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:53.556 12:30:19 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:53.556 12:30:19 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:53.556 00:05:53.556 real 0m0.033s 00:05:53.556 user 0m0.022s 00:05:53.556 sys 0m0.010s 00:05:53.556 12:30:19 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.556 ************************************ 00:05:53.556 END TEST accel_negative_buffers 00:05:53.556 ************************************ 00:05:53.556 12:30:19 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:53.556 12:30:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:53.556 12:30:19 accel -- accel/accel.sh@115 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:53.556 12:30:19 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:53.556 12:30:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.556 12:30:19 accel -- common/autotest_common.sh@10 -- # set +x 00:05:53.556 ************************************ 00:05:53.556 START TEST accel_crc32c 00:05:53.556 ************************************ 00:05:53.556 12:30:19 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:53.556 12:30:19 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:53.556 12:30:19 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:53.556 12:30:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:53.556 12:30:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:53.556 12:30:19 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:53.556 12:30:19 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:53.556 12:30:19 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:53.556 12:30:19 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.556 12:30:19 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.556 12:30:19 accel.accel_crc32c -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:05:53.556 12:30:19 accel.accel_crc32c -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:05:53.556 12:30:19 accel.accel_crc32c -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:05:53.556 12:30:19 accel.accel_crc32c -- accel/accel.sh@45 -- # [[ -n '' ]] 00:05:53.556 12:30:19 accel.accel_crc32c -- accel/accel.sh@49 -- # local IFS=, 00:05:53.556 12:30:19 accel.accel_crc32c -- accel/accel.sh@50 -- # jq -r . 00:05:53.815 [2024-07-12 12:30:19.643868] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:05:53.815 [2024-07-12 12:30:19.643966] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61539 ] 00:05:53.815 [2024-07-12 12:30:19.784062] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.074 [2024-07-12 12:30:19.954852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.074 12:30:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:54.074 12:30:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.074 12:30:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.074 12:30:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.074 12:30:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:54.074 12:30:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.074 12:30:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.074 12:30:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.074 12:30:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:54.074 12:30:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.074 12:30:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.074 12:30:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.074 12:30:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:54.074 12:30:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.074 12:30:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.074 12:30:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.074 12:30:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:54.074 12:30:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.074 12:30:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.074 12:30:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.074 12:30:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:54.074 12:30:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.074 12:30:20 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:54.074 12:30:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.074 12:30:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.074 12:30:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:54.075 12:30:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.075 12:30:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.075 12:30:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.075 12:30:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:54.075 12:30:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.075 12:30:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.075 12:30:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.075 12:30:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:54.075 12:30:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.075 12:30:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.075 12:30:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.075 12:30:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:54.075 12:30:20 accel.accel_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:05:54.075 12:30:20 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:54.075 12:30:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.075 12:30:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.075 12:30:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:54.075 12:30:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.075 12:30:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.075 12:30:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.075 12:30:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:54.075 12:30:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.075 12:30:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.075 12:30:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.075 12:30:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:54.075 12:30:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.075 12:30:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.075 12:30:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.075 12:30:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:54.075 12:30:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.075 12:30:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.075 12:30:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.075 12:30:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:54.075 12:30:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.075 12:30:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.075 12:30:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.075 12:30:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:54.075 12:30:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.075 12:30:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.075 12:30:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:54.075 12:30:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:54.075 12:30:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:54.075 12:30:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:54.075 12:30:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:55.452 12:30:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:55.452 12:30:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:55.452 12:30:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:55.452 12:30:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:55.452 12:30:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:55.452 12:30:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:55.452 12:30:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:55.452 12:30:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:55.452 12:30:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:55.452 12:30:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:55.452 12:30:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:55.452 12:30:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:55.452 12:30:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:55.452 12:30:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" 
in 00:05:55.452 12:30:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:55.452 ************************************ 00:05:55.452 END TEST accel_crc32c 00:05:55.452 ************************************ 00:05:55.452 12:30:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:55.452 12:30:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:55.452 12:30:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:55.452 12:30:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:55.452 12:30:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:55.452 12:30:21 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:55.452 12:30:21 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:55.452 12:30:21 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:55.452 12:30:21 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:55.452 12:30:21 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:55.452 12:30:21 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:55.452 12:30:21 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:55.452 00:05:55.452 real 0m1.589s 00:05:55.452 user 0m1.359s 00:05:55.452 sys 0m0.136s 00:05:55.452 12:30:21 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.452 12:30:21 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:55.452 12:30:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:55.452 12:30:21 accel -- accel/accel.sh@116 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:55.452 12:30:21 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:55.452 12:30:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.452 12:30:21 accel -- common/autotest_common.sh@10 -- # set +x 00:05:55.452 ************************************ 00:05:55.452 START TEST accel_crc32c_C2 00:05:55.452 ************************************ 00:05:55.452 12:30:21 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:55.452 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:55.452 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:55.452 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.452 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:55.452 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.452 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:55.452 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:55.452 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:55.452 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:55.452 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:05:55.452 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:05:55.452 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:05:55.452 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@45 -- # [[ -n '' ]] 00:05:55.452 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@49 -- # local IFS=, 00:05:55.452 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@50 -- # jq -r . 
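The repeated 'IFS=:', 'read -r var val' and 'case "$var" in' steps in the crc32c trace above are the accel_test helper in accel.sh parsing accel_perf's colon-separated configuration output line by line, so that the closing '[[ -n software ]]' and '[[ -n crc32c ]]' checks can confirm which engine and opcode actually ran. A minimal sketch of that parsing pattern, assuming illustrative match strings and whitespace trimming rather than the script's exact code:

  while IFS=: read -r var val; do
      case "$var" in
          *opcode*) accel_opc=${val# } ;;    # e.g. crc32c
          *module*) accel_module=${val# } ;; # e.g. software
      esac
  done < <("$accel_perf" "$@")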
00:05:55.452 [2024-07-12 12:30:21.281754] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:05:55.452 [2024-07-12 12:30:21.281849] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61579 ] 00:05:55.452 [2024-07-12 12:30:21.419888] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.711 [2024-07-12 12:30:21.569077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:55.711 12:30:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:57.093 12:30:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:57.093 12:30:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.093 12:30:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:57.093 12:30:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:57.093 12:30:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:57.093 12:30:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.093 12:30:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:57.093 12:30:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:57.093 12:30:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:57.093 12:30:22 accel.accel_crc32c_C2 
-- accel/accel.sh@21 -- # case "$var" in 00:05:57.093 12:30:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:57.093 12:30:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:57.093 12:30:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:57.093 12:30:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.093 12:30:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:57.093 12:30:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:57.093 12:30:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:57.093 12:30:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.093 12:30:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:57.093 12:30:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:57.093 12:30:22 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:57.093 12:30:22 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:57.093 12:30:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:57.093 12:30:22 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:57.093 12:30:22 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:57.093 12:30:22 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:57.093 12:30:22 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:57.093 00:05:57.093 real 0m1.559s 00:05:57.093 user 0m1.346s 00:05:57.093 sys 0m0.119s 00:05:57.093 12:30:22 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.093 12:30:22 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:57.093 ************************************ 00:05:57.093 END TEST accel_crc32c_C2 00:05:57.093 ************************************ 00:05:57.093 12:30:22 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:57.093 12:30:22 accel -- accel/accel.sh@117 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:57.093 12:30:22 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:57.093 12:30:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.093 12:30:22 accel -- common/autotest_common.sh@10 -- # set +x 00:05:57.093 ************************************ 00:05:57.093 START TEST accel_copy 00:05:57.093 ************************************ 00:05:57.093 12:30:22 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:05:57.093 12:30:22 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:57.093 12:30:22 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:57.093 12:30:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.094 12:30:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.094 12:30:22 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:57.094 12:30:22 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:57.094 12:30:22 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:57.094 12:30:22 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:57.094 12:30:22 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:57.094 12:30:22 accel.accel_copy -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:05:57.094 12:30:22 accel.accel_copy -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:05:57.094 12:30:22 accel.accel_copy -- accel/accel.sh@43 
-- # [[ 0 -gt 0 ]] 00:05:57.094 12:30:22 accel.accel_copy -- accel/accel.sh@45 -- # [[ -n '' ]] 00:05:57.094 12:30:22 accel.accel_copy -- accel/accel.sh@49 -- # local IFS=, 00:05:57.094 12:30:22 accel.accel_copy -- accel/accel.sh@50 -- # jq -r . 00:05:57.094 [2024-07-12 12:30:22.897041] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:05:57.094 [2024-07-12 12:30:22.897150] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61608 ] 00:05:57.094 [2024-07-12 12:30:23.037743] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.358 [2024-07-12 12:30:23.194794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@21 -- # case 
"$var" in 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.358 12:30:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.359 12:30:23 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:57.359 12:30:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.359 12:30:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.359 12:30:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.359 12:30:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:57.359 12:30:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.359 12:30:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.359 12:30:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.359 12:30:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:57.359 12:30:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.359 12:30:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.359 12:30:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.733 12:30:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:58.733 12:30:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.733 12:30:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.733 12:30:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.733 12:30:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:58.733 12:30:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.733 12:30:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.733 12:30:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.733 12:30:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:58.733 12:30:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.733 12:30:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.733 12:30:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.733 12:30:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:58.733 12:30:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.733 12:30:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.733 12:30:24 accel.accel_copy -- 
accel/accel.sh@19 -- # read -r var val 00:05:58.733 12:30:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:58.733 12:30:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.733 12:30:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.733 12:30:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.733 12:30:24 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:58.733 12:30:24 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:58.733 12:30:24 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:58.733 12:30:24 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:58.733 ************************************ 00:05:58.733 END TEST accel_copy 00:05:58.733 ************************************ 00:05:58.733 12:30:24 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:58.733 12:30:24 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:58.733 12:30:24 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:58.733 00:05:58.733 real 0m1.598s 00:05:58.733 user 0m1.374s 00:05:58.733 sys 0m0.128s 00:05:58.733 12:30:24 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.733 12:30:24 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:58.733 12:30:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:58.733 12:30:24 accel -- accel/accel.sh@118 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:58.733 12:30:24 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:58.733 12:30:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.733 12:30:24 accel -- common/autotest_common.sh@10 -- # set +x 00:05:58.733 ************************************ 00:05:58.733 START TEST accel_fill 00:05:58.733 ************************************ 00:05:58.733 12:30:24 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:58.733 12:30:24 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:58.733 12:30:24 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:58.733 12:30:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:58.733 12:30:24 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:58.733 12:30:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:58.733 12:30:24 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:58.733 12:30:24 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:58.733 12:30:24 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:58.733 12:30:24 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:58.733 12:30:24 accel.accel_fill -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:05:58.733 12:30:24 accel.accel_fill -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:05:58.733 12:30:24 accel.accel_fill -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:05:58.733 12:30:24 accel.accel_fill -- accel/accel.sh@45 -- # [[ -n '' ]] 00:05:58.733 12:30:24 accel.accel_fill -- accel/accel.sh@49 -- # local IFS=, 00:05:58.733 12:30:24 accel.accel_fill -- accel/accel.sh@50 -- # jq -r . 00:05:58.733 [2024-07-12 12:30:24.546255] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:05:58.733 [2024-07-12 12:30:24.546356] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61649 ] 00:05:58.733 [2024-07-12 12:30:24.685432] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.991 [2024-07-12 12:30:24.811101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:58.991 12:30:24 accel.accel_fill -- 
accel/accel.sh@22 -- # accel_module=software 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:58.991 12:30:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:58.992 12:30:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:58.992 12:30:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:58.992 12:30:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:58.992 12:30:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:58.992 12:30:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:58.992 12:30:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:58.992 12:30:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:00.371 12:30:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:00.371 12:30:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:00.371 12:30:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:00.371 12:30:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:00.371 12:30:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:00.371 12:30:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:00.371 12:30:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:00.371 12:30:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:00.371 12:30:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:00.371 12:30:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:00.371 12:30:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:00.371 12:30:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:00.372 12:30:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:00.372 12:30:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:00.372 12:30:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:00.372 12:30:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:00.372 
12:30:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:00.372 12:30:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:00.372 12:30:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:00.372 12:30:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:00.372 12:30:26 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:00.372 12:30:26 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:00.372 12:30:26 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:00.372 12:30:26 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:00.372 12:30:26 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:00.372 12:30:26 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:00.372 12:30:26 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:00.372 00:06:00.372 real 0m1.547s 00:06:00.372 user 0m1.324s 00:06:00.372 sys 0m0.130s 00:06:00.372 12:30:26 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.372 ************************************ 00:06:00.372 END TEST accel_fill 00:06:00.372 ************************************ 00:06:00.372 12:30:26 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:00.372 12:30:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:00.372 12:30:26 accel -- accel/accel.sh@119 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:00.372 12:30:26 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:00.372 12:30:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.372 12:30:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:00.372 ************************************ 00:06:00.372 START TEST accel_copy_crc32c 00:06:00.372 ************************************ 00:06:00.372 12:30:26 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:00.372 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:00.372 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:00.372 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.372 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.372 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:00.372 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:00.372 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:00.372 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:00.372 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:00.372 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:06:00.372 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:06:00.372 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:06:00.372 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@45 -- # [[ -n '' ]] 00:06:00.372 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@49 -- # local IFS=, 00:06:00.372 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@50 -- # jq -r . 00:06:00.372 [2024-07-12 12:30:26.142880] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:06:00.372 [2024-07-12 12:30:26.142963] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61678 ] 00:06:00.372 [2024-07-12 12:30:26.279608] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.372 [2024-07-12 12:30:26.439031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.631 
12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:00.631 12:30:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.005 12:30:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:02.005 12:30:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.005 12:30:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.005 12:30:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.006 12:30:27 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val= 00:06:02.006 12:30:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.006 12:30:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.006 12:30:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.006 12:30:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:02.006 12:30:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.006 12:30:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.006 12:30:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.006 12:30:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:02.006 12:30:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.006 12:30:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.006 12:30:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.006 12:30:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:02.006 12:30:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.006 12:30:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.006 12:30:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.006 12:30:27 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:02.006 12:30:27 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:02.006 12:30:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:02.006 12:30:27 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:02.006 12:30:27 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:02.006 12:30:27 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:02.006 12:30:27 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:02.006 00:06:02.006 real 0m1.579s 00:06:02.006 user 0m1.357s 00:06:02.006 sys 0m0.131s 00:06:02.006 12:30:27 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.006 12:30:27 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:02.006 ************************************ 00:06:02.006 END TEST accel_copy_crc32c 00:06:02.006 ************************************ 00:06:02.006 12:30:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:02.006 12:30:27 accel -- accel/accel.sh@120 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:02.006 12:30:27 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:02.006 12:30:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.006 12:30:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.006 ************************************ 00:06:02.006 START TEST accel_copy_crc32c_C2 00:06:02.006 ************************************ 00:06:02.006 12:30:27 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:02.006 12:30:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:02.006 12:30:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:02.006 12:30:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.006 12:30:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:02.006 12:30:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.006 12:30:27 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:02.006 12:30:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:02.006 12:30:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.006 12:30:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.006 12:30:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:06:02.006 12:30:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:06:02.006 12:30:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:06:02.006 12:30:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@45 -- # [[ -n '' ]] 00:06:02.006 12:30:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@49 -- # local IFS=, 00:06:02.006 12:30:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@50 -- # jq -r . 00:06:02.006 [2024-07-12 12:30:27.771741] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:06:02.006 [2024-07-12 12:30:27.771849] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61718 ] 00:06:02.006 [2024-07-12 12:30:27.902663] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.006 [2024-07-12 12:30:28.054332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # 
accel_opc=copy_crc32c 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:02.265 12:30:28 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:03.641 12:30:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:03.641 12:30:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.641 12:30:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:03.641 12:30:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:03.641 12:30:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:03.641 12:30:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.641 12:30:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:03.641 12:30:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:03.641 12:30:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:03.641 12:30:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.641 12:30:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:03.641 12:30:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:03.641 12:30:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:03.641 12:30:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.641 12:30:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:03.641 12:30:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:03.641 12:30:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:03.641 12:30:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.641 12:30:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:03.641 12:30:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:03.641 ************************************ 00:06:03.641 END TEST accel_copy_crc32c_C2 00:06:03.641 ************************************ 00:06:03.641 12:30:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:03.641 12:30:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:03.641 12:30:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:03.641 12:30:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:03.641 12:30:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:03.641 12:30:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:03.641 12:30:29 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:03.641 00:06:03.641 real 0m1.565s 00:06:03.641 user 0m1.351s 00:06:03.641 sys 0m0.124s 00:06:03.641 12:30:29 accel.accel_copy_crc32c_C2 -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.641 12:30:29 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:03.641 12:30:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:03.641 12:30:29 accel -- accel/accel.sh@121 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:03.641 12:30:29 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:03.641 12:30:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.641 12:30:29 accel -- common/autotest_common.sh@10 -- # set +x 00:06:03.641 ************************************ 00:06:03.641 START TEST accel_dualcast 00:06:03.641 ************************************ 00:06:03.641 12:30:29 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:03.641 12:30:29 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:03.641 12:30:29 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:03.641 12:30:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.641 12:30:29 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:03.641 12:30:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:03.641 12:30:29 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:03.641 12:30:29 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:03.641 12:30:29 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:03.641 12:30:29 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:03.641 12:30:29 accel.accel_dualcast -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:06:03.641 12:30:29 accel.accel_dualcast -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:06:03.641 12:30:29 accel.accel_dualcast -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:06:03.641 12:30:29 accel.accel_dualcast -- accel/accel.sh@45 -- # [[ -n '' ]] 00:06:03.641 12:30:29 accel.accel_dualcast -- accel/accel.sh@49 -- # local IFS=, 00:06:03.641 12:30:29 accel.accel_dualcast -- accel/accel.sh@50 -- # jq -r . 00:06:03.641 [2024-07-12 12:30:29.391265] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
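The dualcast run started above copies each 4096-byte source buffer into two destination buffers for 1 second on the software module. A minimal sketch of the operation, for illustration only (the function name is made up; this is not SPDK code):

/* Dualcast: write one source buffer to two destinations.
 * Illustrative sketch, not SPDK's implementation. */
#include <stddef.h>
#include <string.h>

static void dualcast(void *dst1, void *dst2, const void *src, size_t len)
{
    memcpy(dst1, src, len);
    memcpy(dst2, src, len);
}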
00:06:03.641 [2024-07-12 12:30:29.392207] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61747 ] 00:06:03.641 [2024-07-12 12:30:29.527983] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.641 [2024-07-12 12:30:29.678348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.919 12:30:29 accel.accel_dualcast 
-- accel/accel.sh@19 -- # read -r var val 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:03.919 12:30:29 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:05.294 12:30:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:05.294 12:30:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:05.294 12:30:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:05.294 12:30:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:05.294 12:30:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:05.294 12:30:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:05.294 12:30:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:05.294 12:30:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:05.294 12:30:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:05.294 12:30:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:05.294 12:30:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:05.294 12:30:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:05.294 12:30:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:05.294 12:30:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:05.294 12:30:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:05.294 12:30:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:06:05.294 12:30:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:05.294 12:30:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:05.294 12:30:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:05.294 12:30:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:05.294 12:30:30 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:05.294 12:30:30 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:05.294 12:30:30 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:05.294 12:30:30 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:05.294 12:30:30 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:05.294 12:30:30 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:05.294 12:30:30 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:05.294 00:06:05.294 real 0m1.579s 00:06:05.294 user 0m1.349s 00:06:05.294 sys 0m0.129s 00:06:05.294 12:30:30 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.294 ************************************ 00:06:05.294 END TEST accel_dualcast 00:06:05.294 ************************************ 00:06:05.294 12:30:30 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:05.294 12:30:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:05.294 12:30:30 accel -- accel/accel.sh@122 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:05.294 12:30:30 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:05.294 12:30:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.294 12:30:30 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.294 ************************************ 00:06:05.294 START TEST accel_compare 00:06:05.294 ************************************ 00:06:05.294 12:30:30 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:05.294 12:30:30 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:05.294 12:30:30 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:05.294 12:30:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:05.294 12:30:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:05.294 12:30:30 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:05.294 12:30:30 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:05.294 12:30:30 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:05.294 12:30:30 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.294 12:30:30 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.294 12:30:30 accel.accel_compare -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:06:05.294 12:30:30 accel.accel_compare -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:06:05.294 12:30:30 accel.accel_compare -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:06:05.294 12:30:30 accel.accel_compare -- accel/accel.sh@45 -- # [[ -n '' ]] 00:06:05.294 12:30:30 accel.accel_compare -- accel/accel.sh@49 -- # local IFS=, 00:06:05.294 12:30:30 accel.accel_compare -- accel/accel.sh@50 -- # jq -r . 00:06:05.294 [2024-07-12 12:30:31.009005] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
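The compare workload launched above verifies that two 4096-byte buffers hold identical bytes; in software terms it reduces to a memcmp. Sketch for illustration only (not SPDK code, the name is hypothetical):

/* Compare: report whether two buffers carry identical contents.
 * Illustrative sketch, not SPDK's implementation. */
#include <stddef.h>
#include <string.h>

static int buffers_match(const void *a, const void *b, size_t len)
{
    return memcmp(a, b, len) == 0;   /* 1 when the buffers are equal, 0 otherwise */
}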
00:06:05.294 [2024-07-12 12:30:31.009094] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61787 ] 00:06:05.294 [2024-07-12 12:30:31.145476] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.294 [2024-07-12 12:30:31.294388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.294 12:30:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:05.294 12:30:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:05.294 12:30:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:05.294 12:30:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:05.294 12:30:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:05.294 12:30:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:05.294 12:30:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:05.294 12:30:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:05.295 12:30:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:05.553 12:30:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:06.487 12:30:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:06.487 12:30:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:06.487 12:30:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:06.487 12:30:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:06.487 12:30:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:06.487 12:30:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:06.487 12:30:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:06.487 12:30:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:06.487 12:30:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:06.487 12:30:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:06.487 12:30:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:06.487 12:30:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:06.487 12:30:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:06.487 12:30:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:06.487 12:30:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:06.487 12:30:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:06.487 12:30:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:06:06.487 12:30:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:06.487 12:30:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:06.487 12:30:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:06.487 12:30:32 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:06.487 12:30:32 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:06.487 12:30:32 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:06.487 12:30:32 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:06.487 12:30:32 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:06.487 12:30:32 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:06.487 12:30:32 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:06.487 00:06:06.487 real 0m1.570s 00:06:06.487 user 0m1.351s 00:06:06.487 sys 0m0.125s 00:06:06.487 12:30:32 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.487 12:30:32 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:06.487 ************************************ 00:06:06.487 END TEST accel_compare 00:06:06.487 ************************************ 00:06:06.746 12:30:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:06.746 12:30:32 accel -- accel/accel.sh@123 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:06.746 12:30:32 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:06.746 12:30:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.746 12:30:32 accel -- common/autotest_common.sh@10 -- # set +x 00:06:06.746 ************************************ 00:06:06.746 START TEST accel_xor 00:06:06.746 ************************************ 00:06:06.746 12:30:32 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:06.746 12:30:32 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:06.746 12:30:32 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:06.746 12:30:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:06.746 12:30:32 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:06.746 12:30:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:06.746 12:30:32 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:06.746 12:30:32 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:06.746 12:30:32 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:06.746 12:30:32 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:06.746 12:30:32 accel.accel_xor -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:06:06.746 12:30:32 accel.accel_xor -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:06:06.746 12:30:32 accel.accel_xor -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:06:06.746 12:30:32 accel.accel_xor -- accel/accel.sh@45 -- # [[ -n '' ]] 00:06:06.746 12:30:32 accel.accel_xor -- accel/accel.sh@49 -- # local IFS=, 00:06:06.746 12:30:32 accel.accel_xor -- accel/accel.sh@50 -- # jq -r . 00:06:06.746 [2024-07-12 12:30:32.626030] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
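The xor run launched above XORs source buffers into a 4096-byte destination; the val=2 entry that follows appears to set the number of source buffers to two. A byte-wise sketch of the two-source case, for illustration only (not SPDK code):

/* Two-source XOR: dst[i] = a[i] ^ b[i].
 * Illustrative sketch, not SPDK's implementation. */
#include <stddef.h>
#include <stdint.h>

static void xor2(uint8_t *dst, const uint8_t *a, const uint8_t *b, size_t len)
{
    for (size_t i = 0; i < len; i++)
        dst[i] = a[i] ^ b[i];
}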
00:06:06.746 [2024-07-12 12:30:32.626139] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61822 ] 00:06:06.746 [2024-07-12 12:30:32.761592] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.005 [2024-07-12 12:30:32.907287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:07.005 12:30:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.373 12:30:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:08.373 12:30:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.373 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.373 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.374 12:30:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:08.374 12:30:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.374 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.374 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.374 12:30:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:08.374 12:30:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.374 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.374 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.374 12:30:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:08.374 12:30:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.374 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.374 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.374 12:30:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:08.374 12:30:34 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:06:08.374 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.374 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.374 12:30:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:08.374 ************************************ 00:06:08.374 END TEST accel_xor 00:06:08.374 ************************************ 00:06:08.374 12:30:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.374 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.374 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.374 12:30:34 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:08.374 12:30:34 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:08.374 12:30:34 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:08.374 00:06:08.374 real 0m1.563s 00:06:08.374 user 0m1.343s 00:06:08.374 sys 0m0.126s 00:06:08.374 12:30:34 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.374 12:30:34 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:08.374 12:30:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:08.374 12:30:34 accel -- accel/accel.sh@124 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:08.374 12:30:34 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:08.374 12:30:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.374 12:30:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:08.374 ************************************ 00:06:08.374 START TEST accel_xor 00:06:08.374 ************************************ 00:06:08.374 12:30:34 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:08.374 12:30:34 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:08.374 12:30:34 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:08.374 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.374 12:30:34 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:08.374 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.374 12:30:34 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:08.374 12:30:34 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:08.374 12:30:34 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:08.374 12:30:34 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:08.374 12:30:34 accel.accel_xor -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:06:08.374 12:30:34 accel.accel_xor -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:06:08.374 12:30:34 accel.accel_xor -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:06:08.374 12:30:34 accel.accel_xor -- accel/accel.sh@45 -- # [[ -n '' ]] 00:06:08.374 12:30:34 accel.accel_xor -- accel/accel.sh@49 -- # local IFS=, 00:06:08.374 12:30:34 accel.accel_xor -- accel/accel.sh@50 -- # jq -r . 00:06:08.374 [2024-07-12 12:30:34.240897] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
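This second accel_xor run passes -x 3, which appears to raise the XOR source count from two to three (the val=3 entry that follows). Generalizing the earlier two-source sketch to N sources, again for illustration only (not SPDK code):

/* N-source XOR: dst[i] = srcs[0][i] ^ srcs[1][i] ^ ... ^ srcs[nsrc-1][i].
 * Assumes nsrc >= 1. Illustrative sketch, not SPDK's implementation. */
#include <stddef.h>
#include <stdint.h>

static void xor_n(uint8_t *dst, const uint8_t *const *srcs, size_t nsrc, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        uint8_t v = srcs[0][i];
        for (size_t s = 1; s < nsrc; s++)
            v ^= srcs[s][i];
        dst[i] = v;
    }
}

With nsrc = 3 this corresponds to the -x 3 configuration being set up here.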
00:06:08.374 [2024-07-12 12:30:34.241050] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61858 ] 00:06:08.374 [2024-07-12 12:30:34.381305] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.631 [2024-07-12 12:30:34.539999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:08.631 12:30:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:10.001 12:30:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:10.001 12:30:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:10.001 12:30:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:10.001 12:30:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:10.001 12:30:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:10.001 12:30:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:10.001 12:30:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:10.001 12:30:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:10.001 12:30:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:10.001 12:30:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:10.001 12:30:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:10.001 12:30:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:10.001 12:30:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:10.001 12:30:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:10.001 12:30:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:10.001 12:30:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:10.001 12:30:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:10.001 12:30:35 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:06:10.001 12:30:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:10.001 12:30:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:10.001 12:30:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:10.001 12:30:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:10.001 12:30:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:10.001 12:30:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:10.001 12:30:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:10.001 12:30:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:10.001 12:30:35 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:10.001 00:06:10.001 real 0m1.596s 00:06:10.001 user 0m1.363s 00:06:10.001 sys 0m0.137s 00:06:10.001 12:30:35 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.001 ************************************ 00:06:10.001 END TEST accel_xor 00:06:10.001 ************************************ 00:06:10.001 12:30:35 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:10.001 12:30:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:10.001 12:30:35 accel -- accel/accel.sh@125 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:10.001 12:30:35 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:10.001 12:30:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.001 12:30:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:10.001 ************************************ 00:06:10.001 START TEST accel_dif_verify 00:06:10.001 ************************************ 00:06:10.001 12:30:35 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:10.001 12:30:35 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:10.001 12:30:35 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:10.001 12:30:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:10.001 12:30:35 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:10.001 12:30:35 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:10.001 12:30:35 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:10.001 12:30:35 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:10.001 12:30:35 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.002 12:30:35 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.002 12:30:35 accel.accel_dif_verify -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:06:10.002 12:30:35 accel.accel_dif_verify -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:06:10.002 12:30:35 accel.accel_dif_verify -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:06:10.002 12:30:35 accel.accel_dif_verify -- accel/accel.sh@45 -- # [[ -n '' ]] 00:06:10.002 12:30:35 accel.accel_dif_verify -- accel/accel.sh@49 -- # local IFS=, 00:06:10.002 12:30:35 accel.accel_dif_verify -- accel/accel.sh@50 -- # jq -r . 00:06:10.002 [2024-07-12 12:30:35.887310] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
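The START TEST/END TEST banners and the real/user/sys lines come from the run_test wrapper in common/autotest_common.sh. A hedged sketch of its shape, inferred only from what this log prints (banner, timed command, banner); the actual SPDK implementation also manages xtrace and argument checks not shown here:

  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                 # produces the real/user/sys lines seen above
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }
  # Usage mirroring the calls in this section, e.g.:
  # run_test accel_dif_verify accel_test -t 1 -w dif_verify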
00:06:10.002 [2024-07-12 12:30:35.887481] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61898 ] 00:06:10.002 [2024-07-12 12:30:36.028757] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.260 [2024-07-12 12:30:36.173412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:10.260 12:30:36 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:10.260 12:30:36 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:11.669 12:30:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:11.669 12:30:37 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:11.669 12:30:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:11.669 12:30:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:11.669 12:30:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:11.669 12:30:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:11.669 12:30:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:11.669 12:30:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:11.669 12:30:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:11.669 12:30:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:11.669 12:30:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:11.669 12:30:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:11.669 12:30:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:11.669 12:30:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:11.669 12:30:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:11.669 12:30:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:11.669 12:30:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:11.669 12:30:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:11.669 12:30:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:11.669 12:30:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:11.669 12:30:37 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:11.669 12:30:37 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:11.669 12:30:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:11.669 12:30:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:11.669 12:30:37 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:11.669 12:30:37 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:11.669 12:30:37 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:11.669 00:06:11.669 real 0m1.601s 00:06:11.669 user 0m1.369s 00:06:11.669 sys 0m0.137s 00:06:11.669 12:30:37 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.669 12:30:37 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:11.669 ************************************ 00:06:11.669 END TEST accel_dif_verify 00:06:11.669 ************************************ 00:06:11.669 12:30:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:11.669 12:30:37 accel -- accel/accel.sh@126 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:11.669 12:30:37 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:11.669 12:30:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.669 12:30:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:11.669 ************************************ 00:06:11.669 START TEST accel_dif_generate 00:06:11.669 ************************************ 00:06:11.669 12:30:37 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:11.669 12:30:37 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:11.669 12:30:37 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:11.669 12:30:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.669 12:30:37 
accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:11.669 12:30:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.669 12:30:37 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:11.669 12:30:37 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:11.669 12:30:37 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:11.669 12:30:37 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:11.669 12:30:37 accel.accel_dif_generate -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:06:11.669 12:30:37 accel.accel_dif_generate -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:06:11.669 12:30:37 accel.accel_dif_generate -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:06:11.669 12:30:37 accel.accel_dif_generate -- accel/accel.sh@45 -- # [[ -n '' ]] 00:06:11.669 12:30:37 accel.accel_dif_generate -- accel/accel.sh@49 -- # local IFS=, 00:06:11.669 12:30:37 accel.accel_dif_generate -- accel/accel.sh@50 -- # jq -r . 00:06:11.669 [2024-07-12 12:30:37.539788] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:06:11.669 [2024-07-12 12:30:37.540084] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61927 ] 00:06:11.669 [2024-07-12 12:30:37.675359] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.946 [2024-07-12 12:30:37.828814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.946 12:30:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:11.946 12:30:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.946 12:30:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.946 12:30:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.946 12:30:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:11.946 12:30:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.946 12:30:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.946 12:30:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.946 12:30:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:11.946 12:30:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.946 12:30:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.946 12:30:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.946 12:30:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:11.946 12:30:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.946 12:30:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.946 12:30:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.946 12:30:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:11.946 12:30:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.946 12:30:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.946 12:30:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.946 12:30:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:11.946 
12:30:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.946 12:30:37 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:11.946 12:30:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.946 12:30:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.946 12:30:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.947 12:30:37 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:11.947 12:30:37 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:13.322 12:30:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:13.322 12:30:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:13.322 12:30:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:13.322 12:30:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:13.322 12:30:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:13.322 12:30:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:13.322 12:30:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:13.322 12:30:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:13.322 12:30:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:13.322 12:30:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:13.322 12:30:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:13.322 12:30:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:13.322 12:30:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:13.322 12:30:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:13.322 12:30:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:13.322 12:30:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:13.322 12:30:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:13.322 12:30:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:13.322 12:30:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:13.322 12:30:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:13.322 12:30:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:13.322 12:30:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:13.322 12:30:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:13.322 12:30:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:13.322 12:30:39 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:13.322 12:30:39 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:13.322 12:30:39 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:13.322 00:06:13.322 real 0m1.594s 00:06:13.322 user 0m1.367s 00:06:13.322 sys 0m0.132s 00:06:13.322 12:30:39 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.322 12:30:39 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:13.322 ************************************ 00:06:13.322 END TEST accel_dif_generate 00:06:13.322 ************************************ 00:06:13.322 12:30:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:13.322 12:30:39 accel -- accel/accel.sh@127 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:13.322 12:30:39 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:13.322 12:30:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.322 12:30:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:13.322 ************************************ 00:06:13.322 START TEST accel_dif_generate_copy 00:06:13.322 ************************************ 00:06:13.322 12:30:39 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:13.322 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:13.322 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:13.322 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:13.322 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:13.322 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:13.322 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:13.322 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:13.322 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.322 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.322 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:06:13.322 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:06:13.322 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:06:13.322 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@45 -- # [[ -n '' ]] 00:06:13.322 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@49 -- # local IFS=, 00:06:13.322 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@50 -- # jq -r . 00:06:13.322 [2024-07-12 12:30:39.195951] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
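The three DIF tests in this stretch are driven by near-identical run_test calls (accel.sh@125-@127); only the -w workload changes. Collapsed into one standalone loop purely for illustration, with the same assumptions as the earlier sketches (CI VM paths, no JSON config, software engine):

  cd /home/vagrant/spdk_repo/spdk
  for w in dif_verify dif_generate dif_generate_copy; do
      ./build/examples/accel_perf -t 1 -w "$w"
  done
  # Each harness test then ends with the accel.sh@27 checks visible above:
  # the reported module is non-empty, an opcode was captured, and on this
  # virtual node the module is the software engine, roughly
  #   [[ -n $accel_module ]] && [[ -n $accel_opc ]] && [[ $accel_module == software ]]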
00:06:13.322 [2024-07-12 12:30:39.196080] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61968 ] 00:06:13.322 [2024-07-12 12:30:39.338212] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.581 [2024-07-12 12:30:39.490339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:13.581 12:30:39 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:13.581 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:13.582 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:13.582 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:13.582 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:13.582 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:13.582 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:13.582 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:13.582 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:13.582 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:13.582 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:13.582 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:13.582 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:13.582 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:13.582 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:13.582 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:13.582 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:13.582 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:13.582 12:30:39 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.955 12:30:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:14.955 12:30:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.955 12:30:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
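The build_accel_config trace lines (accel.sh@31-@50: accel_json_cfg=(), the kernel-module and '-gt 0' checks, local IFS=',', jq -r .) show how the harness assembles the JSON handed to accel_perf as -c /dev/fd/62: optional engine snippets are collected in an array, comma-joined, wrapped, pretty-printed through jq, and passed via process substitution. A hypothetical reconstruction; the exact wrapper keys are assumptions, and with the array empty (as on this run) only the software engine is configured:

  accel_json_cfg=()                      # no DSA/crypto snippets on this run
  build_config() {
      local IFS=,
      jq -r . <<< "{\"subsystems\":[{\"subsystem\":\"accel\",\"config\":[${accel_json_cfg[*]}]}]}"
  }
  ./build/examples/accel_perf -c <(build_config) -t 1 -w dif_generate_copy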
00:06:14.955 12:30:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.955 12:30:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:14.955 12:30:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.955 12:30:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.955 12:30:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.955 12:30:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:14.955 12:30:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.955 12:30:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.955 12:30:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.955 12:30:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:14.955 12:30:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.955 12:30:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.955 12:30:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.955 12:30:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:14.955 12:30:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.955 12:30:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.955 12:30:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.955 ************************************ 00:06:14.955 END TEST accel_dif_generate_copy 00:06:14.955 ************************************ 00:06:14.955 12:30:40 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:14.955 12:30:40 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:14.955 12:30:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:14.955 12:30:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:14.955 12:30:40 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:14.955 12:30:40 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:14.955 12:30:40 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:14.955 00:06:14.955 real 0m1.593s 00:06:14.955 user 0m1.374s 00:06:14.955 sys 0m0.125s 00:06:14.955 12:30:40 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.955 12:30:40 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:14.955 12:30:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:14.955 12:30:40 accel -- accel/accel.sh@129 -- # [[ y == y ]] 00:06:14.955 12:30:40 accel -- accel/accel.sh@130 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:14.955 12:30:40 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:14.955 12:30:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.955 12:30:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.955 ************************************ 00:06:14.955 START TEST accel_comp 00:06:14.955 ************************************ 00:06:14.955 12:30:40 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:14.955 12:30:40 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:14.955 12:30:40 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:14.955 12:30:40 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:14.955 12:30:40 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:14.955 12:30:40 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:14.955 12:30:40 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:14.955 12:30:40 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:14.955 12:30:40 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.955 12:30:40 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.955 12:30:40 accel.accel_comp -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:06:14.955 12:30:40 accel.accel_comp -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:06:14.955 12:30:40 accel.accel_comp -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:06:14.955 12:30:40 accel.accel_comp -- accel/accel.sh@45 -- # [[ -n '' ]] 00:06:14.955 12:30:40 accel.accel_comp -- accel/accel.sh@49 -- # local IFS=, 00:06:14.955 12:30:40 accel.accel_comp -- accel/accel.sh@50 -- # jq -r . 00:06:14.955 [2024-07-12 12:30:40.830574] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:06:14.955 [2024-07-12 12:30:40.830688] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62003 ] 00:06:14.955 [2024-07-12 12:30:40.967877] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.213 [2024-07-12 12:30:41.122840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:15.213 12:30:41 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:15.213 
12:30:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:15.213 12:30:41 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:16.589 12:30:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:16.589 12:30:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:16.589 12:30:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:16.589 12:30:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:16.589 12:30:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:16.589 12:30:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:16.589 12:30:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:16.589 12:30:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:16.589 12:30:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:16.589 12:30:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:16.589 12:30:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:16.589 12:30:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:16.589 12:30:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:16.589 12:30:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:16.589 12:30:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:16.589 12:30:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:16.589 12:30:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:16.589 12:30:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:16.589 12:30:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:16.589 12:30:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:16.589 ************************************ 00:06:16.589 END TEST accel_comp 00:06:16.589 12:30:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:16.589 12:30:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:16.589 12:30:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:16.589 12:30:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:16.589 12:30:42 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:16.589 12:30:42 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:16.589 12:30:42 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.589 00:06:16.589 real 0m1.580s 00:06:16.589 user 0m1.361s 00:06:16.589 sys 0m0.125s 00:06:16.589 12:30:42 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.589 12:30:42 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:16.589 ************************************ 00:06:16.589 12:30:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:16.589 12:30:42 accel -- accel/accel.sh@131 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:16.589 12:30:42 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:16.589 12:30:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.589 12:30:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:16.589 ************************************ 00:06:16.589 START TEST 
accel_decomp 00:06:16.589 ************************************ 00:06:16.589 12:30:42 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:16.589 12:30:42 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:16.589 12:30:42 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:16.589 12:30:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:16.589 12:30:42 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:16.589 12:30:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:16.589 12:30:42 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:16.589 12:30:42 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:16.589 12:30:42 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.589 12:30:42 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.589 12:30:42 accel.accel_decomp -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:06:16.589 12:30:42 accel.accel_decomp -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:06:16.589 12:30:42 accel.accel_decomp -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:06:16.589 12:30:42 accel.accel_decomp -- accel/accel.sh@45 -- # [[ -n '' ]] 00:06:16.589 12:30:42 accel.accel_decomp -- accel/accel.sh@49 -- # local IFS=, 00:06:16.589 12:30:42 accel.accel_decomp -- accel/accel.sh@50 -- # jq -r . 00:06:16.589 [2024-07-12 12:30:42.470137] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:06:16.589 [2024-07-12 12:30:42.470279] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62037 ] 00:06:16.589 [2024-07-12 12:30:42.616975] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.847 [2024-07-12 12:30:42.762490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.847 12:30:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:16.847 12:30:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:16.847 12:30:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:16.847 12:30:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:16.847 12:30:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:16.847 12:30:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:16.847 12:30:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:16.847 12:30:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:16.847 12:30:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:16.847 12:30:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:16.847 12:30:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:16.847 12:30:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:16.847 12:30:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:16.847 12:30:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:16.847 12:30:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:16.847 12:30:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:16.847 12:30:42 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:16.847 12:30:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:16.847 12:30:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:16.847 12:30:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:16.847 12:30:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:16.847 12:30:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:16.847 12:30:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:16.847 12:30:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:16.847 12:30:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:16.847 12:30:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:16.847 12:30:42 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:16.847 12:30:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:16.847 12:30:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:16.847 12:30:42 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:16.847 12:30:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:16.847 12:30:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:16.847 12:30:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:16.847 12:30:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:16.847 12:30:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:16.847 12:30:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:16.847 12:30:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:16.847 12:30:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:16.847 12:30:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:16.847 12:30:42 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:16.847 12:30:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:16.847 12:30:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:16.847 12:30:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:16.847 12:30:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:16.848 12:30:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:16.848 12:30:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:16.848 12:30:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:16.848 12:30:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:16.848 12:30:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:16.848 12:30:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:16.848 12:30:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:16.848 12:30:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:16.848 12:30:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:16.848 12:30:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:16.848 12:30:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:16.848 12:30:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:16.848 12:30:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:16.848 12:30:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:16.848 12:30:42 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:16.848 12:30:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:16.848 12:30:42 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:06:16.848 12:30:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:16.848 12:30:42 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:16.848 12:30:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:16.848 12:30:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:16.848 12:30:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:16.848 12:30:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:16.848 12:30:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:16.848 12:30:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:16.848 12:30:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:16.848 12:30:42 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:16.848 12:30:42 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:16.848 12:30:42 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:16.848 12:30:42 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:18.222 12:30:44 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:18.222 12:30:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:18.222 12:30:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:18.222 12:30:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:18.222 12:30:44 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:18.222 12:30:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:18.222 12:30:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:18.222 12:30:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:18.222 12:30:44 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:18.222 12:30:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:18.222 12:30:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:18.222 12:30:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:18.222 12:30:44 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:18.222 12:30:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:18.222 12:30:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:18.222 12:30:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:18.222 12:30:44 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:18.222 12:30:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:18.222 12:30:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:18.222 12:30:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:18.222 12:30:44 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:18.222 12:30:44 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:18.222 12:30:44 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:18.222 12:30:44 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:18.222 12:30:44 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:18.223 12:30:44 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:18.223 12:30:44 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.223 00:06:18.223 real 0m1.587s 00:06:18.223 user 0m1.350s 00:06:18.223 sys 0m0.142s 00:06:18.223 ************************************ 00:06:18.223 END TEST accel_decomp 00:06:18.223 ************************************ 00:06:18.223 12:30:44 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.223 12:30:44 accel.accel_decomp -- 
common/autotest_common.sh@10 -- # set +x 00:06:18.223 12:30:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:18.223 12:30:44 accel -- accel/accel.sh@132 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:18.223 12:30:44 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:18.223 12:30:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.223 12:30:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:18.223 ************************************ 00:06:18.223 START TEST accel_decomp_full 00:06:18.223 ************************************ 00:06:18.223 12:30:44 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:18.223 12:30:44 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:18.223 12:30:44 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:18.223 12:30:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:18.223 12:30:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:18.223 12:30:44 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:18.223 12:30:44 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:18.223 12:30:44 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:18.223 12:30:44 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.223 12:30:44 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.223 12:30:44 accel.accel_decomp_full -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:06:18.223 12:30:44 accel.accel_decomp_full -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:06:18.223 12:30:44 accel.accel_decomp_full -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:06:18.223 12:30:44 accel.accel_decomp_full -- accel/accel.sh@45 -- # [[ -n '' ]] 00:06:18.223 12:30:44 accel.accel_decomp_full -- accel/accel.sh@49 -- # local IFS=, 00:06:18.223 12:30:44 accel.accel_decomp_full -- accel/accel.sh@50 -- # jq -r . 00:06:18.223 [2024-07-12 12:30:44.099197] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:06:18.223 [2024-07-12 12:30:44.099331] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62072 ] 00:06:18.223 [2024-07-12 12:30:44.236323] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.479 [2024-07-12 12:30:44.382525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:18.479 12:30:44 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:18.479 12:30:44 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:19.884 12:30:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:19.884 12:30:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:19.884 12:30:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:19.884 12:30:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:19.884 12:30:45 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:19.884 12:30:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:19.884 12:30:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:19.884 12:30:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:19.884 12:30:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:19.884 12:30:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:19.884 12:30:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:19.884 12:30:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:19.884 12:30:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:19.884 12:30:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:19.884 12:30:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:19.884 12:30:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:19.884 12:30:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:19.884 12:30:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:19.884 12:30:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:19.884 12:30:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:19.884 12:30:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:19.884 12:30:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:19.884 12:30:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:19.884 12:30:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:19.884 12:30:45 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:19.884 12:30:45 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:19.884 12:30:45 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.884 00:06:19.884 real 0m1.591s 00:06:19.884 user 0m1.373s 00:06:19.884 sys 0m0.123s 00:06:19.884 12:30:45 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.884 12:30:45 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:19.884 ************************************ 00:06:19.884 END TEST accel_decomp_full 00:06:19.884 ************************************ 00:06:19.884 12:30:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:19.884 12:30:45 accel -- accel/accel.sh@133 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:19.884 12:30:45 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:19.884 12:30:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.884 12:30:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.884 ************************************ 00:06:19.884 START TEST accel_decomp_mcore 00:06:19.884 ************************************ 00:06:19.884 12:30:45 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:19.884 12:30:45 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:19.884 12:30:45 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:19.884 12:30:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:19.884 12:30:45 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:19.884 12:30:45 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:19.884 12:30:45 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:19.884 12:30:45 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:19.884 12:30:45 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.884 12:30:45 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.884 12:30:45 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:06:19.884 12:30:45 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:06:19.885 12:30:45 accel.accel_decomp_mcore -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:06:19.885 12:30:45 accel.accel_decomp_mcore -- accel/accel.sh@45 -- # [[ -n '' ]] 00:06:19.885 12:30:45 accel.accel_decomp_mcore -- accel/accel.sh@49 -- # local IFS=, 00:06:19.885 12:30:45 accel.accel_decomp_mcore -- accel/accel.sh@50 -- # jq -r . 00:06:19.885 [2024-07-12 12:30:45.740543] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:06:19.885 [2024-07-12 12:30:45.740638] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62106 ] 00:06:19.885 [2024-07-12 12:30:45.881292] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:20.143 [2024-07-12 12:30:46.036291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.143 [2024-07-12 12:30:46.036346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.143 [2024-07-12 12:30:46.036451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:20.143 [2024-07-12 12:30:46.036452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # 
val='1 seconds' 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:20.143 12:30:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.516 12:30:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:21.516 12:30:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.516 12:30:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.516 12:30:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.517 12:30:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:21.517 12:30:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.517 12:30:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.517 12:30:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.517 12:30:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:21.517 12:30:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.517 12:30:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.517 12:30:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.517 12:30:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:21.517 12:30:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.517 12:30:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.517 12:30:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.517 12:30:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:21.517 12:30:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.517 12:30:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.517 12:30:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.517 12:30:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:21.517 12:30:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.517 12:30:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.517 12:30:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.517 12:30:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:21.517 12:30:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.517 12:30:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.517 12:30:47 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.517 12:30:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:21.517 12:30:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.517 12:30:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.517 12:30:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.517 12:30:47 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:21.517 12:30:47 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.517 12:30:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.517 12:30:47 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.517 12:30:47 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.517 ************************************ 00:06:21.517 END TEST accel_decomp_mcore 00:06:21.517 ************************************ 00:06:21.517 12:30:47 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:21.517 12:30:47 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.517 00:06:21.517 real 0m1.603s 00:06:21.517 user 0m4.771s 00:06:21.517 sys 0m0.147s 00:06:21.517 12:30:47 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.517 12:30:47 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:21.517 12:30:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:21.517 12:30:47 accel -- accel/accel.sh@134 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:21.517 12:30:47 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:21.517 12:30:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.517 12:30:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.517 ************************************ 00:06:21.517 START TEST accel_decomp_full_mcore 00:06:21.517 ************************************ 00:06:21.517 12:30:47 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:21.517 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:21.517 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:21.517 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.517 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.517 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:21.517 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:21.517 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:21.517 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.517 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.517 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:06:21.517 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:06:21.517 12:30:47 
accel.accel_decomp_full_mcore -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:06:21.517 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@45 -- # [[ -n '' ]] 00:06:21.517 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@49 -- # local IFS=, 00:06:21.517 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@50 -- # jq -r . 00:06:21.517 [2024-07-12 12:30:47.388302] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:06:21.517 [2024-07-12 12:30:47.388382] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62146 ] 00:06:21.517 [2024-07-12 12:30:47.522812] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:21.776 [2024-07-12 12:30:47.680307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.776 [2024-07-12 12:30:47.680477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.776 [2024-07-12 12:30:47.680761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.776 [2024-07-12 12:30:47.680609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:21.776 12:30:47 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.776 
12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:21.776 12:30:47 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.149 12:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:23.149 12:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.149 12:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.149 12:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.149 12:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:23.149 12:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.149 12:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.149 12:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.149 12:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:23.149 12:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.149 12:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.149 12:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.149 12:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:23.149 12:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.149 12:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.149 12:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.149 12:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:23.149 12:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.149 12:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.149 12:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.149 12:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:23.149 12:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.149 12:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.149 12:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.149 12:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:23.149 12:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.149 12:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.149 12:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.149 12:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:23.149 12:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.149 12:30:48 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.149 12:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.149 12:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:23.149 12:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:23.149 12:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:23.149 12:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:23.149 12:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:23.149 12:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:23.149 12:30:48 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.149 00:06:23.149 real 0m1.612s 00:06:23.149 user 0m4.813s 00:06:23.149 sys 0m0.151s 00:06:23.149 12:30:48 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:23.149 12:30:48 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:23.149 ************************************ 00:06:23.149 END TEST accel_decomp_full_mcore 00:06:23.149 ************************************ 00:06:23.149 12:30:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:23.149 12:30:49 accel -- accel/accel.sh@135 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:23.149 12:30:49 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:23.149 12:30:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.149 12:30:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:23.149 ************************************ 00:06:23.149 START TEST accel_decomp_mthread 00:06:23.149 ************************************ 00:06:23.149 12:30:49 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:23.149 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:23.149 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:23.149 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.149 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:23.149 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.149 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:23.149 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:23.149 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.149 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.149 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:06:23.149 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:06:23.149 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:06:23.149 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@45 -- # [[ -n '' ]] 00:06:23.149 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@49 -- # local IFS=, 00:06:23.149 12:30:49 
accel.accel_decomp_mthread -- accel/accel.sh@50 -- # jq -r . 00:06:23.149 [2024-07-12 12:30:49.053012] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:06:23.149 [2024-07-12 12:30:49.053147] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62189 ] 00:06:23.149 [2024-07-12 12:30:49.197116] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.407 [2024-07-12 12:30:49.343006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.407 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:23.407 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.407 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.407 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.407 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:23.407 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.407 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.407 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.407 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:23.407 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.407 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.407 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.407 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:23.407 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.407 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.407 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.407 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r 
var val 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:23.408 12:30:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:23.408 12:30:49 accel.accel_decomp_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:06:24.778 12:30:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:24.778 12:30:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:24.778 12:30:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:24.778 12:30:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:24.778 12:30:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:24.778 12:30:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:24.778 12:30:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:24.778 12:30:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:24.778 12:30:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:24.778 12:30:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:24.778 12:30:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:24.778 12:30:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:24.778 12:30:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:24.778 12:30:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:24.778 12:30:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:24.778 12:30:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:24.778 12:30:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:24.778 12:30:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:24.778 12:30:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:24.778 12:30:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:24.778 12:30:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:24.778 12:30:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:24.778 12:30:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:24.778 12:30:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:24.778 ************************************ 00:06:24.778 END TEST accel_decomp_mthread 00:06:24.778 ************************************ 00:06:24.778 12:30:50 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:24.778 12:30:50 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:24.778 12:30:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:24.778 12:30:50 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:24.778 12:30:50 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:24.778 12:30:50 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:24.778 12:30:50 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.778 00:06:24.778 real 0m1.585s 00:06:24.778 user 0m1.344s 00:06:24.778 sys 0m0.148s 00:06:24.778 12:30:50 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.778 12:30:50 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:24.778 12:30:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:24.778 12:30:50 accel -- accel/accel.sh@136 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:24.778 12:30:50 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:24.778 12:30:50 accel -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.778 12:30:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.778 ************************************ 00:06:24.778 START TEST accel_decomp_full_mthread 00:06:24.778 ************************************ 00:06:24.778 12:30:50 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:24.778 12:30:50 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:24.778 12:30:50 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:24.778 12:30:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:24.778 12:30:50 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:24.778 12:30:50 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:24.778 12:30:50 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:24.778 12:30:50 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:24.778 12:30:50 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.778 12:30:50 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.778 12:30:50 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:06:24.778 12:30:50 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:06:24.778 12:30:50 accel.accel_decomp_full_mthread -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:06:24.778 12:30:50 accel.accel_decomp_full_mthread -- accel/accel.sh@45 -- # [[ -n '' ]] 00:06:24.778 12:30:50 accel.accel_decomp_full_mthread -- accel/accel.sh@49 -- # local IFS=, 00:06:24.778 12:30:50 accel.accel_decomp_full_mthread -- accel/accel.sh@50 -- # jq -r . 00:06:24.778 [2024-07-12 12:30:50.676758] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:06:24.778 [2024-07-12 12:30:50.676856] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62222 ] 00:06:24.778 [2024-07-12 12:30:50.813259] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.035 [2024-07-12 12:30:50.968314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:25.036 12:30:51 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:25.036 12:30:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:26.409 12:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:26.409 12:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:26.409 12:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.409 12:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:26.409 12:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:26.409 12:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:26.409 12:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.409 12:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:26.409 12:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:26.409 12:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:26.409 12:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.409 12:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:26.409 12:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:26.409 12:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:26.409 12:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.409 12:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:26.409 12:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:26.409 12:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:26.409 12:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.409 12:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:26.409 12:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:26.409 12:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:26.409 12:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.409 12:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:26.409 12:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:26.409 12:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:26.409 12:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:26.409 12:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:26.409 12:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:26.409 12:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:26.409 12:30:52 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.409 00:06:26.409 real 0m1.604s 00:06:26.409 user 0m1.378s 00:06:26.409 sys 0m0.127s 00:06:26.409 ************************************ 00:06:26.409 END TEST accel_decomp_full_mthread 00:06:26.409 ************************************ 00:06:26.409 12:30:52 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.409 12:30:52 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 
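For reference, the accel_decomp_full_mthread case above is just the standalone accel_perf example driven with the parameters echoed in the trace: a decompress workload against test/accel/bib, a 1-second run, -o 0 and -T 2, with the software module selected. A rough way to rerun it by hand is sketched below; the harness actually feeds a generated accel JSON config on /dev/fd/62 via build_accel_config, and this sketch assumes that config can be dropped when only the default software path is exercised.

  # sketch only: same flags as the traced invocation, minus the -c /dev/fd/62
  # config that accel.sh generates (assumed unnecessary for the software module)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2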
00:06:26.409 12:30:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:26.409 12:30:52 accel -- accel/accel.sh@138 -- # [[ n == y ]] 00:06:26.409 12:30:52 accel -- accel/accel.sh@150 -- # [[ 0 == 1 ]] 00:06:26.410 12:30:52 accel -- accel/accel.sh@177 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:26.410 12:30:52 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:26.410 12:30:52 accel -- accel/accel.sh@177 -- # build_accel_config 00:06:26.410 12:30:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.410 12:30:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.410 12:30:52 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.410 12:30:52 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.410 12:30:52 accel -- accel/accel.sh@40 -- # [[ '' != \k\e\r\n\e\l ]] 00:06:26.410 12:30:52 accel -- accel/accel.sh@41 -- # [[ 0 -gt 0 ]] 00:06:26.410 12:30:52 accel -- accel/accel.sh@43 -- # [[ 0 -gt 0 ]] 00:06:26.410 12:30:52 accel -- accel/accel.sh@45 -- # [[ -n '' ]] 00:06:26.410 12:30:52 accel -- accel/accel.sh@49 -- # local IFS=, 00:06:26.410 12:30:52 accel -- accel/accel.sh@50 -- # jq -r . 00:06:26.410 ************************************ 00:06:26.410 START TEST accel_dif_functional_tests 00:06:26.410 ************************************ 00:06:26.410 12:30:52 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:26.410 [2024-07-12 12:30:52.359057] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:06:26.410 [2024-07-12 12:30:52.359171] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62259 ] 00:06:26.668 [2024-07-12 12:30:52.496499] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:26.668 [2024-07-12 12:30:52.653909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.668 [2024-07-12 12:30:52.653994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.668 [2024-07-12 12:30:52.653999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.668 [2024-07-12 12:30:52.711024] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:26.927 00:06:26.927 00:06:26.927 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.927 http://cunit.sourceforge.net/ 00:06:26.927 00:06:26.927 00:06:26.927 Suite: accel_dif 00:06:26.927 Test: verify: DIF generated, GUARD check ...passed 00:06:26.927 Test: verify: DIF generated, APPTAG check ...passed 00:06:26.927 Test: verify: DIF generated, REFTAG check ...passed 00:06:26.927 Test: verify: DIF not generated, GUARD check ...[2024-07-12 12:30:52.752922] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:26.927 passed 00:06:26.927 Test: verify: DIF not generated, APPTAG check ...passed 00:06:26.927 Test: verify: DIF not generated, REFTAG check ...passed 00:06:26.927 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:26.927 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-12 12:30:52.753047] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:26.927 [2024-07-12 12:30:52.753244] dif.c: 
776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:26.927 passed 00:06:26.927 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:26.927 Test: verify: REFTAG incorrect, REFTAG ignore ...[2024-07-12 12:30:52.753384] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:26.927 passed 00:06:26.927 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:26.927 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-12 12:30:52.753764] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:26.927 passed 00:06:26.927 Test: verify copy: DIF generated, GUARD check ...passed 00:06:26.927 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:26.927 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:26.927 Test: verify copy: DIF not generated, GUARD check ...passed 00:06:26.927 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-12 12:30:52.754241] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:26.927 passed 00:06:26.927 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-12 12:30:52.754324] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:26.927 [2024-07-12 12:30:52.754471] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:26.927 passed 00:06:26.927 Test: generate copy: DIF generated, GUARD check ...passed 00:06:26.927 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:26.927 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:26.927 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:26.927 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:26.927 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:26.927 Test: generate copy: iovecs-len validate ...[2024-07-12 12:30:52.755015] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:26.927 passed 00:06:26.927 Test: generate copy: buffer alignment validate ...passed 00:06:26.927 00:06:26.927 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.927 suites 1 1 n/a 0 0 00:06:26.927 tests 26 26 26 0 0 00:06:26.927 asserts 115 115 115 0 n/a 00:06:26.927 00:06:26.927 Elapsed time = 0.005 seconds 00:06:27.184 ************************************ 00:06:27.184 END TEST accel_dif_functional_tests 00:06:27.184 ************************************ 00:06:27.184 00:06:27.184 real 0m0.711s 00:06:27.184 user 0m0.903s 00:06:27.184 sys 0m0.172s 00:06:27.184 12:30:53 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.184 12:30:53 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:27.184 12:30:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:27.184 12:30:53 accel -- accel/accel.sh@178 -- # export PCI_ALLOWED= 00:06:27.184 12:30:53 accel -- accel/accel.sh@178 -- # PCI_ALLOWED= 00:06:27.184 00:06:27.184 real 0m36.555s 00:06:27.184 user 0m38.033s 00:06:27.184 sys 0m4.281s 00:06:27.184 12:30:53 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.184 ************************************ 00:06:27.184 12:30:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.184 END TEST accel 00:06:27.184 ************************************ 00:06:27.184 12:30:53 -- common/autotest_common.sh@1142 -- # return 0 00:06:27.184 12:30:53 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:27.184 12:30:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:27.184 12:30:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.184 12:30:53 -- common/autotest_common.sh@10 -- # set +x 00:06:27.184 ************************************ 00:06:27.184 START TEST accel_rpc 00:06:27.184 ************************************ 00:06:27.184 12:30:53 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:27.184 * Looking for test storage... 00:06:27.184 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:27.184 12:30:53 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:27.184 12:30:53 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=62329 00:06:27.184 12:30:53 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 62329 00:06:27.184 12:30:53 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:27.184 12:30:53 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 62329 ']' 00:06:27.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.184 12:30:53 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.184 12:30:53 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.184 12:30:53 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.184 12:30:53 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.184 12:30:53 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.184 [2024-07-12 12:30:53.232509] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:06:27.184 [2024-07-12 12:30:53.232918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62329 ] 00:06:27.443 [2024-07-12 12:30:53.366455] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.443 [2024-07-12 12:30:53.515512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.382 12:30:54 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.382 12:30:54 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:28.382 12:30:54 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:28.382 12:30:54 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:28.382 12:30:54 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:28.382 12:30:54 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:28.382 12:30:54 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:28.382 12:30:54 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:28.382 12:30:54 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.382 12:30:54 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.382 ************************************ 00:06:28.382 START TEST accel_assign_opcode 00:06:28.382 ************************************ 00:06:28.382 12:30:54 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:28.382 12:30:54 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:28.382 12:30:54 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.382 12:30:54 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:28.382 [2024-07-12 12:30:54.200085] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:28.382 12:30:54 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.382 12:30:54 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:28.382 12:30:54 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.382 12:30:54 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:28.382 [2024-07-12 12:30:54.212088] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:28.382 12:30:54 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.382 12:30:54 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:28.382 12:30:54 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.382 12:30:54 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:28.382 [2024-07-12 12:30:54.275718] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:28.382 12:30:54 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.382 12:30:54 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:28.382 12:30:54 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:28.382 12:30:54 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.382 12:30:54 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:28.382 12:30:54 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:28.640 12:30:54 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.640 software 00:06:28.640 00:06:28.640 real 0m0.315s 00:06:28.640 user 0m0.060s 00:06:28.640 sys 0m0.011s 00:06:28.640 ************************************ 00:06:28.640 END TEST accel_assign_opcode 00:06:28.640 ************************************ 00:06:28.640 12:30:54 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.640 12:30:54 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:28.640 12:30:54 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:28.640 12:30:54 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 62329 00:06:28.640 12:30:54 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 62329 ']' 00:06:28.640 12:30:54 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 62329 00:06:28.640 12:30:54 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:28.640 12:30:54 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:28.640 12:30:54 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62329 00:06:28.640 killing process with pid 62329 00:06:28.640 12:30:54 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:28.640 12:30:54 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:28.640 12:30:54 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62329' 00:06:28.640 12:30:54 accel_rpc -- common/autotest_common.sh@967 -- # kill 62329 00:06:28.640 12:30:54 accel_rpc -- common/autotest_common.sh@972 -- # wait 62329 00:06:29.206 00:06:29.206 real 0m1.905s 00:06:29.206 user 0m1.991s 00:06:29.206 sys 0m0.431s 00:06:29.206 ************************************ 00:06:29.206 END TEST accel_rpc 00:06:29.206 ************************************ 00:06:29.206 12:30:55 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.206 12:30:55 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.206 12:30:55 -- common/autotest_common.sh@1142 -- # return 0 00:06:29.206 12:30:55 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:29.206 12:30:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:29.206 12:30:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.206 12:30:55 -- common/autotest_common.sh@10 -- # set +x 00:06:29.206 ************************************ 00:06:29.206 START TEST app_cmdline 00:06:29.206 ************************************ 00:06:29.206 12:30:55 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:29.206 * Looking for test storage... 00:06:29.206 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:29.206 12:30:55 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:29.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
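The accel_assign_opcode flow above is driven entirely over JSON-RPC against a target started with --wait-for-rpc. A minimal sketch of replaying the same sequence by hand, assuming the default /var/tmp/spdk.sock socket and the same repo layout as in the trace:

  # start the target paused so opcode assignments can be made before framework init
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
  # assign the copy opcode to the software module, then let init finish
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o copy -m software
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
  # confirm the assignment, mirroring the jq/grep check in accel_rpc.sh
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments | jq -r .copy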
00:06:29.206 12:30:55 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=62422 00:06:29.206 12:30:55 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:29.207 12:30:55 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 62422 00:06:29.207 12:30:55 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 62422 ']' 00:06:29.207 12:30:55 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.207 12:30:55 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.207 12:30:55 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.207 12:30:55 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.207 12:30:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:29.207 [2024-07-12 12:30:55.204348] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:06:29.207 [2024-07-12 12:30:55.204494] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62422 ] 00:06:29.465 [2024-07-12 12:30:55.340540] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.465 [2024-07-12 12:30:55.488695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.723 [2024-07-12 12:30:55.546137] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:30.288 12:30:56 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.288 12:30:56 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:30.288 12:30:56 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:30.546 { 00:06:30.546 "version": "SPDK v24.09-pre git sha1 07d3b03c8", 00:06:30.546 "fields": { 00:06:30.546 "major": 24, 00:06:30.546 "minor": 9, 00:06:30.546 "patch": 0, 00:06:30.546 "suffix": "-pre", 00:06:30.546 "commit": "07d3b03c8" 00:06:30.546 } 00:06:30.546 } 00:06:30.546 12:30:56 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:30.546 12:30:56 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:30.546 12:30:56 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:30.546 12:30:56 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:30.546 12:30:56 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:30.546 12:30:56 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.546 12:30:56 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:30.546 12:30:56 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:30.546 12:30:56 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:30.546 12:30:56 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.546 12:30:56 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:30.546 12:30:56 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:30.546 12:30:56 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:30.546 12:30:56 app_cmdline -- 
common/autotest_common.sh@648 -- # local es=0 00:06:30.546 12:30:56 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:30.546 12:30:56 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:30.546 12:30:56 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.546 12:30:56 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:30.546 12:30:56 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.546 12:30:56 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:30.546 12:30:56 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.546 12:30:56 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:30.546 12:30:56 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:30.546 12:30:56 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:30.804 request: 00:06:30.804 { 00:06:30.804 "method": "env_dpdk_get_mem_stats", 00:06:30.804 "req_id": 1 00:06:30.804 } 00:06:30.804 Got JSON-RPC error response 00:06:30.804 response: 00:06:30.804 { 00:06:30.804 "code": -32601, 00:06:30.804 "message": "Method not found" 00:06:30.804 } 00:06:30.804 12:30:56 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:30.804 12:30:56 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:30.804 12:30:56 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:30.804 12:30:56 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:30.804 12:30:56 app_cmdline -- app/cmdline.sh@1 -- # killprocess 62422 00:06:30.804 12:30:56 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 62422 ']' 00:06:30.804 12:30:56 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 62422 00:06:30.804 12:30:56 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:30.804 12:30:56 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:30.804 12:30:56 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 62422 00:06:30.804 killing process with pid 62422 00:06:30.804 12:30:56 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:30.804 12:30:56 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:30.804 12:30:56 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62422' 00:06:30.804 12:30:56 app_cmdline -- common/autotest_common.sh@967 -- # kill 62422 00:06:30.804 12:30:56 app_cmdline -- common/autotest_common.sh@972 -- # wait 62422 00:06:31.419 00:06:31.419 real 0m2.145s 00:06:31.419 user 0m2.653s 00:06:31.419 sys 0m0.472s 00:06:31.419 12:30:57 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.419 12:30:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:31.419 ************************************ 00:06:31.419 END TEST app_cmdline 00:06:31.419 ************************************ 00:06:31.419 12:30:57 -- common/autotest_common.sh@1142 -- # return 0 00:06:31.419 12:30:57 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:31.419 12:30:57 -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:31.419 12:30:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.419 12:30:57 -- common/autotest_common.sh@10 -- # set +x 00:06:31.419 ************************************ 00:06:31.419 START TEST version 00:06:31.419 ************************************ 00:06:31.419 12:30:57 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:31.419 * Looking for test storage... 00:06:31.419 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:31.419 12:30:57 version -- app/version.sh@17 -- # get_header_version major 00:06:31.419 12:30:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:31.419 12:30:57 version -- app/version.sh@14 -- # cut -f2 00:06:31.419 12:30:57 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.419 12:30:57 version -- app/version.sh@17 -- # major=24 00:06:31.419 12:30:57 version -- app/version.sh@18 -- # get_header_version minor 00:06:31.419 12:30:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:31.419 12:30:57 version -- app/version.sh@14 -- # cut -f2 00:06:31.419 12:30:57 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.419 12:30:57 version -- app/version.sh@18 -- # minor=9 00:06:31.419 12:30:57 version -- app/version.sh@19 -- # get_header_version patch 00:06:31.419 12:30:57 version -- app/version.sh@14 -- # cut -f2 00:06:31.419 12:30:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:31.419 12:30:57 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.419 12:30:57 version -- app/version.sh@19 -- # patch=0 00:06:31.419 12:30:57 version -- app/version.sh@20 -- # get_header_version suffix 00:06:31.419 12:30:57 version -- app/version.sh@14 -- # cut -f2 00:06:31.419 12:30:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:31.419 12:30:57 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.419 12:30:57 version -- app/version.sh@20 -- # suffix=-pre 00:06:31.419 12:30:57 version -- app/version.sh@22 -- # version=24.9 00:06:31.419 12:30:57 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:31.419 12:30:57 version -- app/version.sh@28 -- # version=24.9rc0 00:06:31.419 12:30:57 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:31.419 12:30:57 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:31.419 12:30:57 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:31.419 12:30:57 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:31.419 00:06:31.419 real 0m0.146s 00:06:31.419 user 0m0.078s 00:06:31.419 sys 0m0.099s 00:06:31.419 ************************************ 00:06:31.419 END TEST version 00:06:31.419 ************************************ 00:06:31.419 12:30:57 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.419 12:30:57 version -- common/autotest_common.sh@10 -- # set +x 00:06:31.419 12:30:57 -- common/autotest_common.sh@1142 -- # return 0 00:06:31.419 12:30:57 -- spdk/autotest.sh@188 -- # 
'[' 0 -eq 1 ']' 00:06:31.419 12:30:57 -- spdk/autotest.sh@198 -- # uname -s 00:06:31.419 12:30:57 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:31.419 12:30:57 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:31.419 12:30:57 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:06:31.419 12:30:57 -- spdk/autotest.sh@205 -- # [[ 0 -eq 0 ]] 00:06:31.419 12:30:57 -- spdk/autotest.sh@206 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:31.419 12:30:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:31.419 12:30:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.419 12:30:57 -- common/autotest_common.sh@10 -- # set +x 00:06:31.419 ************************************ 00:06:31.419 START TEST spdk_dd 00:06:31.419 ************************************ 00:06:31.419 12:30:57 spdk_dd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:06:31.677 * Looking for test storage... 00:06:31.677 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:31.677 12:30:57 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:31.677 12:30:57 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:31.677 12:30:57 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:31.677 12:30:57 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:31.678 12:30:57 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.678 12:30:57 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.678 12:30:57 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.678 12:30:57 spdk_dd -- paths/export.sh@5 -- # export PATH 00:06:31.678 12:30:57 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.678 12:30:57 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:31.937 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:31.937 
0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:31.937 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:06:31.937 12:30:57 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:06:31.937 12:30:57 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@230 -- # local class 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@232 -- # local progif 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@233 -- # class=01 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@15 -- # local i 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@24 -- # return 0 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@15 -- # local i 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@24 -- # return 0 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:06:31.937 12:30:57 
spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@325 -- # (( 2 )) 00:06:31.937 12:30:57 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:31.937 12:30:57 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:06:31.937 12:30:57 spdk_dd -- dd/common.sh@139 -- # local lib so 00:06:31.937 12:30:57 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:06:31.937 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.937 12:30:57 spdk_dd -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:06:31.937 12:30:57 spdk_dd -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:31.937 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:06:31.937 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.937 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:06:31.937 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.937 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:06:31.937 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.937 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:06:31.937 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.937 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:06:31.937 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.937 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:06:31.937 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.1 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ 
libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.14.1 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.15.1 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.15.1 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- 
# [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.9.1 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.938 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:06:31.939 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.939 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:06:31.939 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.939 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:06:31.939 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.939 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:06:31.939 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.939 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:06:31.939 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.939 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:06:31.939 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.939 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:06:31.939 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.939 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:06:31.939 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.939 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:06:31.939 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.939 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:06:31.939 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.939 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:06:31.939 12:30:57 spdk_dd -- dd/common.sh@142 
-- # read -r lib _ so _ 00:06:31.939 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:06:31.939 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.939 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:06:31.939 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.939 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:06:31.939 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.939 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:06:31.939 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.939 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:06:31.939 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.939 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:06:31.939 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.939 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:06:31.939 12:30:57 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:06:31.939 12:30:57 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:06:31.939 12:30:57 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:06:31.939 * spdk_dd linked to liburing 00:06:31.939 12:30:57 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:31.939 12:30:57 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:31.939 
12:30:57 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=y 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:31.939 12:30:57 spdk_dd -- 
common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:31.939 12:30:57 spdk_dd -- common/build_config.sh@83 -- # CONFIG_URING=y 00:06:31.939 12:30:57 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:06:31.939 12:30:57 spdk_dd -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:06:31.939 12:30:58 spdk_dd -- dd/common.sh@156 -- # export liburing_in_use=1 00:06:31.939 12:30:58 spdk_dd -- dd/common.sh@156 -- # liburing_in_use=1 00:06:31.939 12:30:58 spdk_dd -- dd/common.sh@157 -- # return 0 00:06:31.939 12:30:58 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:06:31.939 12:30:58 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:31.939 12:30:58 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:31.939 12:30:58 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.939 12:30:58 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:31.939 ************************************ 00:06:31.939 START TEST spdk_dd_basic_rw 00:06:31.939 ************************************ 00:06:31.939 12:30:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:06:32.198 * Looking for test storage... 
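The trace above (dd/common.sh@142-157) is the liburing link check: the script walks the shared-object dependencies of the spdk_dd binary, matches each one against liburing.so.*, confirms CONFIG_URING=y in build_config.sh and that /usr/lib64/liburing.so.2 exists, and exports liburing_in_use=1 so the dd.sh@15 gate ("uring requested but spdk_dd not linked to it") does not trip. A minimal sketch of what that check amounts to; the read pattern, glob match and printf are taken from the trace, while the ldd plumbing and the SPDK_DD shorthand are assumptions made here for illustration:

  # Sketch of the liburing link detection traced in dd/common.sh@142-157.
  # SPDK_DD is a stand-in; the real script derives the binary path from its repo root.
  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  liburing_in_use=0
  while read -r lib _ so _; do                   # each ldd line: "libfoo.so.1 => /path/libfoo.so.1 (0x...)"
      [[ $lib == liburing.so.* ]] || continue    # same glob match as dd/common.sh@143
      printf '* spdk_dd linked to liburing\n'
      liburing_in_use=1
  done < <(ldd "$SPDK_DD")
  # dd/common.sh@149-156 then re-checks CONFIG_URING=y and that /usr/lib64/liburing.so.2 exists
  # before exporting the flag, which is what lets dd.sh@15 proceed with the uring-backed tests.
  export liburing_in_use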
00:06:32.198 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:32.198 12:30:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:32.198 12:30:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:32.198 12:30:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:32.198 12:30:58 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:32.198 12:30:58 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.198 12:30:58 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.198 12:30:58 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.198 12:30:58 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:06:32.198 12:30:58 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.198 12:30:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:06:32.198 12:30:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:06:32.198 12:30:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:06:32.198 12:30:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:06:32.198 12:30:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:06:32.198 12:30:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:32.198 12:30:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:32.198 12:30:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:32.198 12:30:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:32.199 12:30:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:06:32.199 12:30:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:06:32.199 12:30:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:06:32.199 12:30:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:06:32.460 12:30:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace 
Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not 
Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:06:32.460 12:30:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:06:32.460 12:30:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial 
Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported 
Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private 
Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:06:32.460 12:30:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:06:32.460 12:30:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:06:32.460 12:30:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:06:32.460 12:30:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:06:32.460 12:30:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:32.460 12:30:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:32.460 12:30:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.460 12:30:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:32.460 12:30:58 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:06:32.461 12:30:58 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:32.461 12:30:58 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:32.461 ************************************ 00:06:32.461 START TEST dd_bs_lt_native_bs 00:06:32.461 ************************************ 00:06:32.461 12:30:58 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1123 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:32.461 12:30:58 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@648 -- # local es=0 00:06:32.461 12:30:58 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:32.461 12:30:58 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.461 12:30:58 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:32.461 12:30:58 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.461 12:30:58 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:32.461 12:30:58 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.461 12:30:58 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:32.461 12:30:58 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:32.461 12:30:58 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:32.461 12:30:58 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:06:32.461 { 00:06:32.461 "subsystems": [ 00:06:32.461 { 00:06:32.461 "subsystem": "bdev", 00:06:32.461 "config": [ 00:06:32.461 { 00:06:32.461 "params": { 00:06:32.461 "trtype": "pcie", 00:06:32.461 "traddr": "0000:00:10.0", 00:06:32.461 "name": "Nvme0" 00:06:32.461 }, 00:06:32.461 "method": "bdev_nvme_attach_controller" 00:06:32.461 }, 00:06:32.461 { 00:06:32.461 "method": "bdev_wait_for_examine" 00:06:32.461 } 00:06:32.461 ] 00:06:32.461 } 00:06:32.461 ] 00:06:32.461 } 00:06:32.461 [2024-07-12 12:30:58.361967] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
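The large identify dump that dominates the trace above is consumed by get_native_nvme_bs (dd/common.sh@124-134): the whole spdk_nvme_identify output is captured with mapfile and matched twice with bash regexes, first to pull the current LBA format index (#04 in this run) and then to pull that format's data size (4096), which becomes the native block size used by the rest of basic_rw.sh. A rough equivalent, with the identify tool path shortened and the regexes rewritten as plain EREs in variables; the exact quoting inside the real script may differ:

  # Sketch of get_native_nvme_bs as traced from dd/common.sh@124-134.
  get_native_nvme_bs() {
      local pci=$1 lbaf id re
      mapfile -t id < <(spdk_nvme_identify -r "trtype:pcie traddr:$pci")   # full controller/namespace dump
      re='Current LBA Format: +LBA Format #([0-9]+)'
      [[ ${id[*]} =~ $re ]] || return 1
      lbaf=${BASH_REMATCH[1]}                                              # "04" in this run
      re="LBA Format #$lbaf: Data Size: +([0-9]+)"
      [[ ${id[*]} =~ $re ]] || return 1
      echo "${BASH_REMATCH[1]}"                                            # 4096, the native block size
  }
  native_bs=$(get_native_nvme_bs 0000:00:10.0)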
00:06:32.461 [2024-07-12 12:30:58.362066] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62743 ] 00:06:32.461 [2024-07-12 12:30:58.498680] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.719 [2024-07-12 12:30:58.628183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.719 [2024-07-12 12:30:58.685361] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:32.977 [2024-07-12 12:30:58.794799] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:06:32.977 [2024-07-12 12:30:58.794882] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:32.977 [2024-07-12 12:30:58.916730] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:32.977 ************************************ 00:06:32.977 END TEST dd_bs_lt_native_bs 00:06:32.977 ************************************ 00:06:32.977 12:30:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # es=234 00:06:32.977 12:30:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:32.977 12:30:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # es=106 00:06:32.977 12:30:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # case "$es" in 00:06:32.977 12:30:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@668 -- # es=1 00:06:32.977 12:30:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:32.977 00:06:32.977 real 0m0.712s 00:06:32.977 user 0m0.513s 00:06:32.977 sys 0m0.151s 00:06:32.977 12:30:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.977 12:30:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:06:33.235 12:30:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:06:33.235 12:30:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:06:33.235 12:30:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:33.235 12:30:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.235 12:30:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:33.235 ************************************ 00:06:33.235 START TEST dd_rw 00:06:33.235 ************************************ 00:06:33.235 12:30:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1123 -- # basic_rw 4096 00:06:33.235 12:30:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:06:33.235 12:30:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:06:33.235 12:30:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:06:33.235 12:30:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:06:33.235 12:30:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:33.235 12:30:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:33.235 12:30:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:33.235 12:30:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:33.235 12:30:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:06:33.235 12:30:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:06:33.235 12:30:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:33.235 12:30:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:33.235 12:30:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:33.235 12:30:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:33.235 12:30:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:33.235 12:30:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:33.235 12:30:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:33.235 12:30:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:33.802 12:30:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:06:33.802 12:30:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:33.802 12:30:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:33.802 12:30:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:33.802 [2024-07-12 12:30:59.759950] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:06:33.802 [2024-07-12 12:30:59.760277] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62774 ] 00:06:33.802 { 00:06:33.802 "subsystems": [ 00:06:33.802 { 00:06:33.802 "subsystem": "bdev", 00:06:33.802 "config": [ 00:06:33.802 { 00:06:33.802 "params": { 00:06:33.802 "trtype": "pcie", 00:06:33.802 "traddr": "0000:00:10.0", 00:06:33.802 "name": "Nvme0" 00:06:33.802 }, 00:06:33.802 "method": "bdev_nvme_attach_controller" 00:06:33.802 }, 00:06:33.802 { 00:06:33.802 "method": "bdev_wait_for_examine" 00:06:33.802 } 00:06:33.802 ] 00:06:33.802 } 00:06:33.802 ] 00:06:33.802 } 00:06:34.060 [2024-07-12 12:30:59.895721] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.060 [2024-07-12 12:31:00.019601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.060 [2024-07-12 12:31:00.077089] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:34.578  Copying: 60/60 [kB] (average 19 MBps) 00:06:34.578 00:06:34.578 12:31:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:06:34.578 12:31:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:34.578 12:31:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:34.578 12:31:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:34.578 [2024-07-12 12:31:00.477111] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
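The dd_bs_lt_native_bs run that finishes above is a negative test: spdk_dd is asked to write with --bs=2048, smaller than the 4096-byte native block size just discovered, and the call is wrapped in SPDK's NOT helper so the test only passes if spdk_dd refuses (the "*ERROR*: --bs value cannot be less than ... native block size" line is the expected outcome). A condensed sketch of the call; gen_conf is the SPDK helper that prints the bdev JSON seen in the trace, and the stdin data plus process substitution here stand in for the explicit /dev/fd/61 and /dev/fd/62 redirections of the real run:

  # Sketch of the dd_bs_lt_native_bs negative test traced above.
  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  NOT "$SPDK_DD" --if=<(head -c 4096 /dev/urandom) --ob=Nvme0n1 --bs=2048 --json <(gen_conf)
  # NOT inverts the exit status; autotest_common.sh then maps the raw es=234 (>128) down through
  # es=106 to es=1, so the logged refusal is exactly what makes the test count as PASS.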
00:06:34.578 [2024-07-12 12:31:00.477212] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62793 ] 00:06:34.578 { 00:06:34.578 "subsystems": [ 00:06:34.578 { 00:06:34.578 "subsystem": "bdev", 00:06:34.578 "config": [ 00:06:34.578 { 00:06:34.578 "params": { 00:06:34.578 "trtype": "pcie", 00:06:34.578 "traddr": "0000:00:10.0", 00:06:34.578 "name": "Nvme0" 00:06:34.578 }, 00:06:34.578 "method": "bdev_nvme_attach_controller" 00:06:34.578 }, 00:06:34.578 { 00:06:34.578 "method": "bdev_wait_for_examine" 00:06:34.578 } 00:06:34.578 ] 00:06:34.578 } 00:06:34.578 ] 00:06:34.578 } 00:06:34.578 [2024-07-12 12:31:00.614239] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.838 [2024-07-12 12:31:00.740114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.838 [2024-07-12 12:31:00.796036] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:35.109  Copying: 60/60 [kB] (average 19 MBps) 00:06:35.109 00:06:35.109 12:31:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:35.109 12:31:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:35.109 12:31:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:35.109 12:31:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:35.109 12:31:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:35.109 12:31:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:35.109 12:31:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:35.109 12:31:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:35.109 12:31:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:35.109 12:31:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:35.109 12:31:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:35.109 { 00:06:35.109 "subsystems": [ 00:06:35.109 { 00:06:35.109 "subsystem": "bdev", 00:06:35.109 "config": [ 00:06:35.109 { 00:06:35.109 "params": { 00:06:35.109 "trtype": "pcie", 00:06:35.109 "traddr": "0000:00:10.0", 00:06:35.109 "name": "Nvme0" 00:06:35.109 }, 00:06:35.109 "method": "bdev_nvme_attach_controller" 00:06:35.109 }, 00:06:35.109 { 00:06:35.109 "method": "bdev_wait_for_examine" 00:06:35.109 } 00:06:35.109 ] 00:06:35.109 } 00:06:35.109 ] 00:06:35.109 } 00:06:35.367 [2024-07-12 12:31:01.198448] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
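The dd_rw matrix that starts here is built from the native block size: three block sizes obtained by left-shifting 4096 (4096, 8192, 16384), two queue depths (1 and 64), and a block count chosen so each pass moves roughly 60 kB (15 blocks at bs=4096 gives the 61440 bytes / "60/60 [kB]" seen in the Copying lines; the trace later uses 7 blocks at bs=8192). Skeleton of that loop, with names taken from the basic_rw.sh lines in the trace; each iteration runs the write/read-back cycle sketched a little further below:

  # Skeleton of the dd_rw parameter matrix traced from dd/basic_rw.sh@11-27.
  native_bs=4096                           # from get_native_nvme_bs above
  qds=(1 64)
  bss=()
  for bs_shift in {0..2}; do
      bss+=($((native_bs << bs_shift)))    # 4096 8192 16384
  done
  for bs in "${bss[@]}"; do
      for qd in "${qds[@]}"; do
          count=15                         # 15 at bs=4096 in the trace; smaller counts at larger bs
          size=$((count * bs))             # 15 * 4096 = 61440 bytes = 60 kB for the first pass
          : # write pass + read-back pass per (bs, qd) pair, see the cycle below
      done
  done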
00:06:35.367 [2024-07-12 12:31:01.198587] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62809 ] 00:06:35.367 [2024-07-12 12:31:01.348527] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.625 [2024-07-12 12:31:01.464297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.625 [2024-07-12 12:31:01.523022] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:35.882  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:35.882 00:06:35.882 12:31:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:35.882 12:31:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:06:35.882 12:31:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:06:35.882 12:31:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:06:35.882 12:31:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:06:35.882 12:31:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:35.882 12:31:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:36.814 12:31:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:06:36.814 12:31:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:36.814 12:31:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:36.814 12:31:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:36.814 [2024-07-12 12:31:02.597346] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
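Each (bs, qd) pair runs the same three-step cycle visible in the spdk_dd invocations above: write the generated dump file into the Nvme0n1 bdev, read the same number of blocks back into a second dump file, diff the two, then clear_nvme overwrites the first megabyte with zeroes so the next pass starts clean. Condensed version with the flags copied from the trace; gen_conf is fed via process substitution here rather than the explicit /dev/fd/62 redirection of the real run, and SPDK_DD is shorthand for the binary path:

  # Condensed (bs, qd) write / read-back / verify cycle from dd/basic_rw.sh@30-45.
  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
  file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
  "$SPDK_DD" --if="$file0" --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json <(gen_conf)                    # write
  "$SPDK_DD" --ib=Nvme0n1 --of="$file1" --bs="$bs" --qd="$qd" --count="$count" --json <(gen_conf)   # read back
  diff -q "$file0" "$file1"                                                                         # verify
  "$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(gen_conf)                  # clear_nvme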
00:06:36.814 [2024-07-12 12:31:02.597537] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62833 ] 00:06:36.814 { 00:06:36.814 "subsystems": [ 00:06:36.814 { 00:06:36.814 "subsystem": "bdev", 00:06:36.814 "config": [ 00:06:36.814 { 00:06:36.814 "params": { 00:06:36.814 "trtype": "pcie", 00:06:36.814 "traddr": "0000:00:10.0", 00:06:36.814 "name": "Nvme0" 00:06:36.814 }, 00:06:36.814 "method": "bdev_nvme_attach_controller" 00:06:36.814 }, 00:06:36.814 { 00:06:36.814 "method": "bdev_wait_for_examine" 00:06:36.814 } 00:06:36.814 ] 00:06:36.814 } 00:06:36.814 ] 00:06:36.814 } 00:06:36.814 [2024-07-12 12:31:02.738334] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.814 [2024-07-12 12:31:02.868511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.071 [2024-07-12 12:31:02.923743] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:37.327  Copying: 60/60 [kB] (average 29 MBps) 00:06:37.327 00:06:37.327 12:31:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:06:37.327 12:31:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:37.327 12:31:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:37.327 12:31:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:37.327 [2024-07-12 12:31:03.308100] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
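The --json argument in each of those calls carries the small bdev configuration that gen_conf prints, the pretty-printed block repeated throughout the trace: it attaches the controller at traddr 0000:00:10.0 as a pcie bdev named Nvme0 and then waits for bdev examination, which is what exposes Nvme0n1 to spdk_dd. Reassembled from the trace as a gen_conf stand-in (the real helper builds it from the method_bdev_nvme_attach_controller_0 array declared at basic_rw.sh@85):

  # gen_conf output reassembled from the trace: the bdev subsystem config handed to spdk_dd via --json.
  gen_conf() {
      cat <<'JSON'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
            "method": "bdev_nvme_attach_controller"
          },
          { "method": "bdev_wait_for_examine" }
        ]
      }
    ]
  }
  JSON
  }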
00:06:37.327 [2024-07-12 12:31:03.308204] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62847 ] 00:06:37.327 { 00:06:37.327 "subsystems": [ 00:06:37.327 { 00:06:37.327 "subsystem": "bdev", 00:06:37.327 "config": [ 00:06:37.327 { 00:06:37.327 "params": { 00:06:37.327 "trtype": "pcie", 00:06:37.327 "traddr": "0000:00:10.0", 00:06:37.327 "name": "Nvme0" 00:06:37.327 }, 00:06:37.327 "method": "bdev_nvme_attach_controller" 00:06:37.327 }, 00:06:37.327 { 00:06:37.327 "method": "bdev_wait_for_examine" 00:06:37.327 } 00:06:37.327 ] 00:06:37.327 } 00:06:37.327 ] 00:06:37.327 } 00:06:37.585 [2024-07-12 12:31:03.439391] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.585 [2024-07-12 12:31:03.553803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.585 [2024-07-12 12:31:03.606765] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:38.117  Copying: 60/60 [kB] (average 58 MBps) 00:06:38.117 00:06:38.117 12:31:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:38.117 12:31:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:06:38.117 12:31:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:38.117 12:31:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:38.117 12:31:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:06:38.117 12:31:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:38.117 12:31:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:38.117 12:31:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:38.117 12:31:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:38.117 12:31:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:38.117 12:31:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:38.117 { 00:06:38.117 "subsystems": [ 00:06:38.117 { 00:06:38.117 "subsystem": "bdev", 00:06:38.117 "config": [ 00:06:38.117 { 00:06:38.117 "params": { 00:06:38.117 "trtype": "pcie", 00:06:38.117 "traddr": "0000:00:10.0", 00:06:38.117 "name": "Nvme0" 00:06:38.117 }, 00:06:38.117 "method": "bdev_nvme_attach_controller" 00:06:38.117 }, 00:06:38.117 { 00:06:38.117 "method": "bdev_wait_for_examine" 00:06:38.117 } 00:06:38.117 ] 00:06:38.117 } 00:06:38.117 ] 00:06:38.117 } 00:06:38.117 [2024-07-12 12:31:04.012611] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:06:38.117 [2024-07-12 12:31:04.012736] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62862 ] 00:06:38.117 [2024-07-12 12:31:04.152204] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.381 [2024-07-12 12:31:04.270303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.381 [2024-07-12 12:31:04.324362] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:38.651  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:38.651 00:06:38.651 12:31:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:38.651 12:31:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:38.651 12:31:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:38.651 12:31:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:38.651 12:31:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:38.651 12:31:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:38.651 12:31:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:38.651 12:31:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:39.215 12:31:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:06:39.215 12:31:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:39.215 12:31:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:39.215 12:31:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:39.215 [2024-07-12 12:31:05.288530] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:06:39.215 [2024-07-12 12:31:05.288815] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62887 ] 00:06:39.473 { 00:06:39.473 "subsystems": [ 00:06:39.473 { 00:06:39.473 "subsystem": "bdev", 00:06:39.473 "config": [ 00:06:39.473 { 00:06:39.473 "params": { 00:06:39.473 "trtype": "pcie", 00:06:39.473 "traddr": "0000:00:10.0", 00:06:39.473 "name": "Nvme0" 00:06:39.473 }, 00:06:39.473 "method": "bdev_nvme_attach_controller" 00:06:39.473 }, 00:06:39.473 { 00:06:39.473 "method": "bdev_wait_for_examine" 00:06:39.473 } 00:06:39.473 ] 00:06:39.473 } 00:06:39.473 ] 00:06:39.473 } 00:06:39.473 [2024-07-12 12:31:05.420170] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.473 [2024-07-12 12:31:05.547736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.732 [2024-07-12 12:31:05.601547] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:39.989  Copying: 56/56 [kB] (average 54 MBps) 00:06:39.990 00:06:39.990 12:31:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:06:39.990 12:31:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:39.990 12:31:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:39.990 12:31:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:39.990 { 00:06:39.990 "subsystems": [ 00:06:39.990 { 00:06:39.990 "subsystem": "bdev", 00:06:39.990 "config": [ 00:06:39.990 { 00:06:39.990 "params": { 00:06:39.990 "trtype": "pcie", 00:06:39.990 "traddr": "0000:00:10.0", 00:06:39.990 "name": "Nvme0" 00:06:39.990 }, 00:06:39.990 "method": "bdev_nvme_attach_controller" 00:06:39.990 }, 00:06:39.990 { 00:06:39.990 "method": "bdev_wait_for_examine" 00:06:39.990 } 00:06:39.990 ] 00:06:39.990 } 00:06:39.990 ] 00:06:39.990 } 00:06:39.990 [2024-07-12 12:31:05.991669] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:06:39.990 [2024-07-12 12:31:05.991929] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62900 ] 00:06:40.247 [2024-07-12 12:31:06.129517] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.247 [2024-07-12 12:31:06.248651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.247 [2024-07-12 12:31:06.302469] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:40.764  Copying: 56/56 [kB] (average 27 MBps) 00:06:40.764 00:06:40.764 12:31:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:40.764 12:31:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:40.764 12:31:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:40.764 12:31:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:40.764 12:31:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:40.764 12:31:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:40.764 12:31:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:40.764 12:31:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:40.764 12:31:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:40.764 12:31:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:40.764 12:31:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:40.764 [2024-07-12 12:31:06.693903] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:06:40.764 [2024-07-12 12:31:06.694014] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62921 ] 00:06:40.764 { 00:06:40.764 "subsystems": [ 00:06:40.764 { 00:06:40.764 "subsystem": "bdev", 00:06:40.764 "config": [ 00:06:40.764 { 00:06:40.764 "params": { 00:06:40.764 "trtype": "pcie", 00:06:40.764 "traddr": "0000:00:10.0", 00:06:40.764 "name": "Nvme0" 00:06:40.764 }, 00:06:40.764 "method": "bdev_nvme_attach_controller" 00:06:40.764 }, 00:06:40.764 { 00:06:40.764 "method": "bdev_wait_for_examine" 00:06:40.764 } 00:06:40.764 ] 00:06:40.764 } 00:06:40.764 ] 00:06:40.764 } 00:06:40.764 [2024-07-12 12:31:06.833568] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.022 [2024-07-12 12:31:06.953037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.022 [2024-07-12 12:31:07.007472] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:41.281  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:41.281 00:06:41.281 12:31:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:41.281 12:31:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:06:41.281 12:31:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:06:41.281 12:31:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:06:41.281 12:31:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:06:41.281 12:31:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:41.281 12:31:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:41.845 12:31:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:06:41.845 12:31:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:41.845 12:31:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:41.845 12:31:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:42.102 { 00:06:42.102 "subsystems": [ 00:06:42.102 { 00:06:42.102 "subsystem": "bdev", 00:06:42.102 "config": [ 00:06:42.102 { 00:06:42.102 "params": { 00:06:42.102 "trtype": "pcie", 00:06:42.102 "traddr": "0000:00:10.0", 00:06:42.102 "name": "Nvme0" 00:06:42.102 }, 00:06:42.102 "method": "bdev_nvme_attach_controller" 00:06:42.102 }, 00:06:42.102 { 00:06:42.102 "method": "bdev_wait_for_examine" 00:06:42.102 } 00:06:42.102 ] 00:06:42.102 } 00:06:42.102 ] 00:06:42.102 } 00:06:42.102 [2024-07-12 12:31:07.955796] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:06:42.102 [2024-07-12 12:31:07.956301] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62940 ] 00:06:42.102 [2024-07-12 12:31:08.101440] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.362 [2024-07-12 12:31:08.255114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.362 [2024-07-12 12:31:08.314366] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:42.620  Copying: 56/56 [kB] (average 54 MBps) 00:06:42.620 00:06:42.620 12:31:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:06:42.620 12:31:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:42.620 12:31:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:42.620 12:31:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:42.877 { 00:06:42.877 "subsystems": [ 00:06:42.877 { 00:06:42.877 "subsystem": "bdev", 00:06:42.877 "config": [ 00:06:42.877 { 00:06:42.877 "params": { 00:06:42.877 "trtype": "pcie", 00:06:42.877 "traddr": "0000:00:10.0", 00:06:42.877 "name": "Nvme0" 00:06:42.877 }, 00:06:42.877 "method": "bdev_nvme_attach_controller" 00:06:42.877 }, 00:06:42.877 { 00:06:42.877 "method": "bdev_wait_for_examine" 00:06:42.877 } 00:06:42.877 ] 00:06:42.877 } 00:06:42.877 ] 00:06:42.877 } 00:06:42.877 [2024-07-12 12:31:08.703758] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:06:42.877 [2024-07-12 12:31:08.703871] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62959 ] 00:06:42.877 [2024-07-12 12:31:08.842381] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.135 [2024-07-12 12:31:08.962779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.135 [2024-07-12 12:31:09.017206] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:43.394  Copying: 56/56 [kB] (average 54 MBps) 00:06:43.394 00:06:43.394 12:31:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:43.394 12:31:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:06:43.394 12:31:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:43.394 12:31:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:43.394 12:31:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:06:43.394 12:31:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:43.394 12:31:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:43.394 12:31:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:43.394 12:31:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:43.394 12:31:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:43.394 12:31:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:43.394 { 00:06:43.394 "subsystems": [ 00:06:43.394 { 00:06:43.394 "subsystem": "bdev", 00:06:43.394 "config": [ 00:06:43.394 { 00:06:43.394 "params": { 00:06:43.394 "trtype": "pcie", 00:06:43.394 "traddr": "0000:00:10.0", 00:06:43.394 "name": "Nvme0" 00:06:43.394 }, 00:06:43.394 "method": "bdev_nvme_attach_controller" 00:06:43.394 }, 00:06:43.394 { 00:06:43.394 "method": "bdev_wait_for_examine" 00:06:43.394 } 00:06:43.394 ] 00:06:43.394 } 00:06:43.394 ] 00:06:43.394 } 00:06:43.394 [2024-07-12 12:31:09.421961] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:06:43.394 [2024-07-12 12:31:09.422060] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62975 ] 00:06:43.651 [2024-07-12 12:31:09.561604] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.651 [2024-07-12 12:31:09.682492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.909 [2024-07-12 12:31:09.737846] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:44.168  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:44.168 00:06:44.168 12:31:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:06:44.168 12:31:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:44.168 12:31:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:44.168 12:31:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:44.168 12:31:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:44.168 12:31:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:44.168 12:31:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:44.168 12:31:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:44.734 12:31:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:06:44.734 12:31:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:44.734 12:31:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:44.734 12:31:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:44.734 { 00:06:44.734 "subsystems": [ 00:06:44.734 { 00:06:44.734 "subsystem": "bdev", 00:06:44.734 "config": [ 00:06:44.734 { 00:06:44.734 "params": { 00:06:44.734 "trtype": "pcie", 00:06:44.734 "traddr": "0000:00:10.0", 00:06:44.734 "name": "Nvme0" 00:06:44.734 }, 00:06:44.734 "method": "bdev_nvme_attach_controller" 00:06:44.734 }, 00:06:44.734 { 00:06:44.734 "method": "bdev_wait_for_examine" 00:06:44.734 } 00:06:44.734 ] 00:06:44.734 } 00:06:44.734 ] 00:06:44.734 } 00:06:44.734 [2024-07-12 12:31:10.668678] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:06:44.734 [2024-07-12 12:31:10.669075] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62994 ] 00:06:44.734 [2024-07-12 12:31:10.807371] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.992 [2024-07-12 12:31:10.929877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.992 [2024-07-12 12:31:10.984504] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:45.250  Copying: 48/48 [kB] (average 46 MBps) 00:06:45.250 00:06:45.507 12:31:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:06:45.507 12:31:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:45.507 12:31:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:45.507 12:31:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:45.507 [2024-07-12 12:31:11.367539] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:06:45.507 [2024-07-12 12:31:11.367658] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63013 ] 00:06:45.507 { 00:06:45.507 "subsystems": [ 00:06:45.507 { 00:06:45.507 "subsystem": "bdev", 00:06:45.507 "config": [ 00:06:45.507 { 00:06:45.507 "params": { 00:06:45.507 "trtype": "pcie", 00:06:45.507 "traddr": "0000:00:10.0", 00:06:45.507 "name": "Nvme0" 00:06:45.507 }, 00:06:45.507 "method": "bdev_nvme_attach_controller" 00:06:45.507 }, 00:06:45.507 { 00:06:45.507 "method": "bdev_wait_for_examine" 00:06:45.507 } 00:06:45.507 ] 00:06:45.507 } 00:06:45.507 ] 00:06:45.507 } 00:06:45.507 [2024-07-12 12:31:11.501263] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.765 [2024-07-12 12:31:11.620160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.765 [2024-07-12 12:31:11.674684] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:46.024  Copying: 48/48 [kB] (average 46 MBps) 00:06:46.024 00:06:46.024 12:31:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:46.024 12:31:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:46.024 12:31:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:46.024 12:31:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:46.024 12:31:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:46.024 12:31:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:46.024 12:31:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:46.024 12:31:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:46.024 12:31:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- 
dd/common.sh@18 -- # gen_conf 00:06:46.024 12:31:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:46.024 12:31:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:46.024 [2024-07-12 12:31:12.058772] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:06:46.024 [2024-07-12 12:31:12.058875] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63028 ] 00:06:46.024 { 00:06:46.024 "subsystems": [ 00:06:46.024 { 00:06:46.024 "subsystem": "bdev", 00:06:46.024 "config": [ 00:06:46.024 { 00:06:46.024 "params": { 00:06:46.024 "trtype": "pcie", 00:06:46.024 "traddr": "0000:00:10.0", 00:06:46.024 "name": "Nvme0" 00:06:46.024 }, 00:06:46.024 "method": "bdev_nvme_attach_controller" 00:06:46.024 }, 00:06:46.024 { 00:06:46.024 "method": "bdev_wait_for_examine" 00:06:46.024 } 00:06:46.024 ] 00:06:46.024 } 00:06:46.024 ] 00:06:46.024 } 00:06:46.283 [2024-07-12 12:31:12.192229] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.283 [2024-07-12 12:31:12.309586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.542 [2024-07-12 12:31:12.363083] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:46.800  Copying: 1024/1024 [kB] (average 500 MBps) 00:06:46.800 00:06:46.800 12:31:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:06:46.800 12:31:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:06:46.800 12:31:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:06:46.800 12:31:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:06:46.800 12:31:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:06:46.800 12:31:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:06:46.800 12:31:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:47.367 12:31:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:06:47.367 12:31:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:06:47.367 12:31:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:47.367 12:31:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:47.367 [2024-07-12 12:31:13.289071] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:06:47.367 [2024-07-12 12:31:13.289178] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63047 ] 00:06:47.367 { 00:06:47.367 "subsystems": [ 00:06:47.367 { 00:06:47.367 "subsystem": "bdev", 00:06:47.367 "config": [ 00:06:47.367 { 00:06:47.367 "params": { 00:06:47.367 "trtype": "pcie", 00:06:47.367 "traddr": "0000:00:10.0", 00:06:47.367 "name": "Nvme0" 00:06:47.367 }, 00:06:47.367 "method": "bdev_nvme_attach_controller" 00:06:47.367 }, 00:06:47.367 { 00:06:47.367 "method": "bdev_wait_for_examine" 00:06:47.367 } 00:06:47.367 ] 00:06:47.367 } 00:06:47.367 ] 00:06:47.367 } 00:06:47.367 [2024-07-12 12:31:13.423087] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.625 [2024-07-12 12:31:13.552529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.625 [2024-07-12 12:31:13.608370] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:47.884  Copying: 48/48 [kB] (average 46 MBps) 00:06:47.884 00:06:47.884 12:31:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:06:47.884 12:31:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:06:47.884 12:31:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:47.884 12:31:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:48.143 [2024-07-12 12:31:14.008163] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:06:48.143 [2024-07-12 12:31:14.008294] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63066 ] 00:06:48.143 { 00:06:48.143 "subsystems": [ 00:06:48.143 { 00:06:48.143 "subsystem": "bdev", 00:06:48.143 "config": [ 00:06:48.143 { 00:06:48.143 "params": { 00:06:48.143 "trtype": "pcie", 00:06:48.143 "traddr": "0000:00:10.0", 00:06:48.143 "name": "Nvme0" 00:06:48.143 }, 00:06:48.143 "method": "bdev_nvme_attach_controller" 00:06:48.143 }, 00:06:48.143 { 00:06:48.143 "method": "bdev_wait_for_examine" 00:06:48.143 } 00:06:48.143 ] 00:06:48.143 } 00:06:48.143 ] 00:06:48.143 } 00:06:48.143 [2024-07-12 12:31:14.148959] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.402 [2024-07-12 12:31:14.271161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.402 [2024-07-12 12:31:14.328154] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:48.660  Copying: 48/48 [kB] (average 46 MBps) 00:06:48.660 00:06:48.660 12:31:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:48.660 12:31:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:06:48.660 12:31:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:48.660 12:31:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:48.660 12:31:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:06:48.660 12:31:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:48.660 12:31:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:06:48.660 12:31:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:48.660 12:31:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:06:48.660 12:31:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:48.660 12:31:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:48.918 { 00:06:48.918 "subsystems": [ 00:06:48.918 { 00:06:48.918 "subsystem": "bdev", 00:06:48.918 "config": [ 00:06:48.918 { 00:06:48.918 "params": { 00:06:48.918 "trtype": "pcie", 00:06:48.918 "traddr": "0000:00:10.0", 00:06:48.918 "name": "Nvme0" 00:06:48.918 }, 00:06:48.918 "method": "bdev_nvme_attach_controller" 00:06:48.918 }, 00:06:48.918 { 00:06:48.918 "method": "bdev_wait_for_examine" 00:06:48.918 } 00:06:48.918 ] 00:06:48.918 } 00:06:48.918 ] 00:06:48.918 } 00:06:48.918 [2024-07-12 12:31:14.736789] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:06:48.918 [2024-07-12 12:31:14.737077] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63082 ] 00:06:48.918 [2024-07-12 12:31:14.881760] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.190 [2024-07-12 12:31:15.005361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.190 [2024-07-12 12:31:15.062295] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:49.447  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:49.447 00:06:49.447 ************************************ 00:06:49.447 END TEST dd_rw 00:06:49.447 ************************************ 00:06:49.447 00:06:49.447 real 0m16.336s 00:06:49.447 user 0m12.184s 00:06:49.447 sys 0m5.615s 00:06:49.447 12:31:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.447 12:31:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:06:49.447 12:31:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:06:49.447 12:31:15 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:06:49.447 12:31:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:49.447 12:31:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.447 12:31:15 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:49.447 ************************************ 00:06:49.447 START TEST dd_rw_offset 00:06:49.447 ************************************ 00:06:49.447 12:31:15 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1123 -- # basic_offset 00:06:49.447 12:31:15 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:06:49.447 12:31:15 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:06:49.447 12:31:15 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:06:49.447 12:31:15 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:49.447 12:31:15 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:06:49.448 12:31:15 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=fwqu4tgioi3lcxzizpvrnh4sqwpl9itb72pekeah3xjfwy78hd9nw3i6v9jvosjumponahxr63pbmum0zwgbqbtnm070ansu8plg5axx1vbczimgxbklmh9ptfkv81ugtre4ayaatb2u70bp11ca8mnqn0e07nqe7v79hnaxmywoiwphj2zazw6ddw8g73zba4wa45iki32nkg7qyg38njvmx4w82bia1s12pn5kmte0jnmt1znxi65g982ejjfc14r8ys0ayap0in6ztqtfvha0dba59erqtozmxincbi6l30s10trog4cxfrngn4ohnemkn0fvvcslp3gmz9y2y5pfh03rt23zrf55gnxmzlmo4h9qks6nj34p4in2uqo9z0pyi5b8l0z1qdyqk86fbj6bmdwawclh82ec5oc6u9v3cjs3b20avxvudskcqb1tqyg0gwisqb1znqun58nsvxnvglqzso2le7kr778ubbbvm0airkxmtm5onet0uwa9h3f7b6lpcy00w3xakue3pwkw79w7vrq4e287aa1sd2tbyhzb3hzjnngopc304ehrmroqw9l5bt38bttuj5f62p7ej8ca8pqhg98k94mou2xhypnilfv9yk2orho8ahgplagvk6hw6tlzgbwduw03ncyp8lao4yedngfd56ib2pclpgpxyo3gbm8qjhvpa045fniu4g24ubdvs1f6ddtgsfvz2dgb1co0vhord1arbjb9uxa4hdrg5nykkpkj6g9xgagznw5zrbrvw3034o5mgv259redgb23n3gnzq6mi6lm3279a0h5y3j5d7aa7tm1c1eyvoiz533cx9agblnpqvozj787ehnwcl11j04thoj17p0eqe4j3lif50y7h6zu9hlivnyim0pgy6xq524lnoulof0k29vb3jjh6mled8fjm1czxcczowvl7wcfexgi88vzt6v358fmnqtkhytlxnghtrjdm1nm6apwscjrrrdevzxugnk9blayt51j4un75tv5w9dpdgs2so99qt52j2igwfzq7avdnnbt4d6yqy9l0dzdqy1fxca22p6157afzzrdvarrpgzdb2q1cchu9ptzfyqi2mt94bijo8hju7iu0n1375ohazfdmuee0laukds06xv7niume9jh20p2ctjt0j3ql8dhrw2msv7fyx553mat38tz9ztsg9lfy8ngzrne475c7t0zv65w1j8ykm4q45jkw80w7rqf1trrde8rnehnx1pqld32depn6nnwd0vxexwbn3rqu3oiooqjc10yhkka7c3ujkrsjtacti4nknkupfruaoitgn9td49krpr8zuxwjk7tqtwgbwoo2an4tfprim5x90m0rme6py2wq77qqlwi6y8vre9ir9er3ki0m8pad0ys6w9z70tl6wpvh7kdca5njf2nkrly1leklcnklsys1nlwd1w2itifhm4xb8w6zazh41jruatqlrvm1sp9kv3mj3kwj7aa2hs3iv062fkql5h5dtm51dnxukmiri0pwcfw12f2gaxq5g6u0lexz90x3pq0lqy20wcuj1ymox2iqssplcn3vp5qo1nhadt095irigwl5mtbe2rt2w92yexztpkzhr92pqatolroq6lijta95hcni3bh4bj2ixp9vws0al7elzpylkz8ru3bf0pkkkv4j30icruw47niv76hjsx6wkrvade6263qq3fr7dqgvst9x7fm4ia90f4jj6qy5mci67zcx1aozing84sbs5qwdzs96gy6cxhb1dayb2qk1aaln9lphprv0vg8pymfrz6hwx8cuqivcnzhbepiuyfeagqus6xr4wrdmgeyg61ihzx6oam9lwwh509uauhckt1m9nnqcda7zjcvc521u6b2hew8xauswddrc6fpjgl7uedtd3ccgm1jjktqa3buwj06f380mxhvqk929dw9v72o4s0gj0rwghuxioea7s49p2mt9hna0ql4ek9md9gb7oi8eutzmooam05q4j0rrzgq7i6k71ccmsqh0otdfl4bp0a5uk8z9sfr7skfpuvuy8ikbizzm6m214zh94l9rjfq6maf4l2uj84acyxhyi5jqj8g0lwh9szxq6s0a3fz98ik3h1o8x5ddswm0fo6gde16b725hi6wkqahzzv7r16j1qdm36kneonalecbpel2gimybdbx0mz5tohi81rjlkre44wvibbdbofcvxaujeuic4t5ueiffwmtxtingldl4zsaju68bso0t7dc0c3d2zb5jimda3y0wjtr4sndovexr0xv0z8d2bdod0yavyelnobg24b5la5crcmcgajdnt4zrvq1g1sp7vz4qtrti5as8bq4l6drotimfwa45ydg7snz7aehtfr4wqdxshblgbuaochh6wwpw9jcrxudo6ocnmbogd4r7f37qlrerishdnulnnc3py6or26ajp6qov7kknldt1y2shvaq631332g52e5pj8e8cm831yhy3sb1ilj2tu5urn9iwb9o4iva4i60t747fwx4t792uj0pnaz0tx1swgc3nq9pcy50r1bsn6q4yqjy4nq42ut44ctdk7v7copvnmkzvlqrhjgu79e77nd79tx5yl52hw1r7g3o8orq6suia0m3hs16zvlk2gv9abuj56uy71y6bz4v2sk4r2crgw01u6cae6mz61c0pejwa2a5fserswlfmfjskhg3tkljgebz8kokhw606a27sa9w8yqg6ewnmrz9xu4zf4gjd8uxqarba8pojgwl99dksurjztozu06dicpwcicf4qs53z0pnw61tk4k9ta657ldzwlxu0qumrtmx0v74avk8ager2xihrby16831l6jo99av9n17q7cmw7uh94g1112nvhis1ozf25hewmtt0ylg01yoc4ayll1o99l7gbm77llx43cllomxbcb6ql7d1x2aknar835nr1dy0kgayb59stw6imqt6yeqsh21bnc149uq2k8gl1gmtp9wym0xjvp8lmt12c1ju1j8r5sygccafplosmsib7shsrfz3d25ypxy4lbpedauohaxx66t5i0nm552vsdgvwgwak8bhs2fys58d4vgfn3rju05sss8lyp0bw9hrm89ioybj7z5fofjnat4jrwzr5eny3pndh38bdft0f8t8q3h8xvlp5kol9nti72sc45pxs6fppfne3ms66o6on7c6cbn4vxixnx11urnq6mgnnkfitjhwntdfn4ion4zvs0u8jzzfihvgc7f2gjmz640jytsibmfkysgsvjacq8zp03lg0jplzprz2e4sljr8eigdhn3euvo91lk7l42gcc81mbc0valcyng7zcf8045qeik2mhwq4usk57t7r7x1prb6xa5m661ysz0o1nmpxh66fnzue6fnmomidh8noqzwrolherfxnh7f4u7zek3nx2upz7txxn7fcso6ebzobg23ofkvhq3jkhtrt7fjmgmovp3efztkqq1bbosstsfiocb5x3
khlxvb1up11ccuytfmazm0fr771wl41fi8u1r3lv52h1t1gtdxs5enss56o9co7aop0jo3t12xs6xbqpvbtlzuz9opysywpi061dyc7uc2s0osu0bk5yl93w5y7dfloi024oo8foja32btd9kz27yqetu32wz0utbyvorrsojjfk8u1oggqiouhqukz96gfafql8e3ltc5eyse87sd90lwocam6uqi06w7d0d8r7hs1sjg15exnomzm1zkp7nl3uthjhnsrrdwbxpc0pjb95mxxif1f81rguc6gfqatjndav9l6j04qnd1uhcalt3ju280dqch0z7pyxrid45rgddxtf8fgdhlx03q75e0p389bs2byma1srjdf4nxcd5emw66gi4a1elkj1hq3ycfxpm0y2vw5xfs4lmo97uvbaatqmu96hr0rnlhjdy3wv0calga7lqb6ytge49eh46bl5xii5mt8dbxlboow61vge2s8rlxn265nenttiu437jcb9f867ttica4w8nf62eefzio5o7kvofjr4ii 00:06:49.448 12:31:15 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:06:49.448 12:31:15 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:06:49.448 12:31:15 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:49.448 12:31:15 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:49.705 [2024-07-12 12:31:15.549192] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:06:49.705 [2024-07-12 12:31:15.549312] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63118 ] 00:06:49.705 { 00:06:49.705 "subsystems": [ 00:06:49.705 { 00:06:49.705 "subsystem": "bdev", 00:06:49.705 "config": [ 00:06:49.705 { 00:06:49.705 "params": { 00:06:49.705 "trtype": "pcie", 00:06:49.705 "traddr": "0000:00:10.0", 00:06:49.705 "name": "Nvme0" 00:06:49.705 }, 00:06:49.705 "method": "bdev_nvme_attach_controller" 00:06:49.705 }, 00:06:49.705 { 00:06:49.705 "method": "bdev_wait_for_examine" 00:06:49.705 } 00:06:49.705 ] 00:06:49.705 } 00:06:49.705 ] 00:06:49.705 } 00:06:49.705 [2024-07-12 12:31:15.681366] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.964 [2024-07-12 12:31:15.803180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.964 [2024-07-12 12:31:15.858617] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:50.222  Copying: 4096/4096 [B] (average 4000 kBps) 00:06:50.222 00:06:50.222 12:31:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:06:50.222 12:31:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:06:50.222 12:31:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:06:50.222 12:31:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:50.222 { 00:06:50.222 "subsystems": [ 00:06:50.222 { 00:06:50.222 "subsystem": "bdev", 00:06:50.222 "config": [ 00:06:50.222 { 00:06:50.222 "params": { 00:06:50.222 "trtype": "pcie", 00:06:50.222 "traddr": "0000:00:10.0", 00:06:50.222 "name": "Nvme0" 00:06:50.222 }, 00:06:50.222 "method": "bdev_nvme_attach_controller" 00:06:50.222 }, 00:06:50.222 { 00:06:50.222 "method": "bdev_wait_for_examine" 00:06:50.222 } 00:06:50.222 ] 00:06:50.222 } 00:06:50.222 ] 00:06:50.222 } 00:06:50.222 [2024-07-12 12:31:16.257681] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:06:50.222 [2024-07-12 12:31:16.257817] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63131 ] 00:06:50.480 [2024-07-12 12:31:16.396598] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.480 [2024-07-12 12:31:16.523894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.738 [2024-07-12 12:31:16.579627] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:50.998  Copying: 4096/4096 [B] (average 4000 kBps) 00:06:50.998 00:06:50.998 12:31:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:06:50.998 ************************************ 00:06:50.998 END TEST dd_rw_offset 00:06:50.998 ************************************ 00:06:50.999 12:31:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ fwqu4tgioi3lcxzizpvrnh4sqwpl9itb72pekeah3xjfwy78hd9nw3i6v9jvosjumponahxr63pbmum0zwgbqbtnm070ansu8plg5axx1vbczimgxbklmh9ptfkv81ugtre4ayaatb2u70bp11ca8mnqn0e07nqe7v79hnaxmywoiwphj2zazw6ddw8g73zba4wa45iki32nkg7qyg38njvmx4w82bia1s12pn5kmte0jnmt1znxi65g982ejjfc14r8ys0ayap0in6ztqtfvha0dba59erqtozmxincbi6l30s10trog4cxfrngn4ohnemkn0fvvcslp3gmz9y2y5pfh03rt23zrf55gnxmzlmo4h9qks6nj34p4in2uqo9z0pyi5b8l0z1qdyqk86fbj6bmdwawclh82ec5oc6u9v3cjs3b20avxvudskcqb1tqyg0gwisqb1znqun58nsvxnvglqzso2le7kr778ubbbvm0airkxmtm5onet0uwa9h3f7b6lpcy00w3xakue3pwkw79w7vrq4e287aa1sd2tbyhzb3hzjnngopc304ehrmroqw9l5bt38bttuj5f62p7ej8ca8pqhg98k94mou2xhypnilfv9yk2orho8ahgplagvk6hw6tlzgbwduw03ncyp8lao4yedngfd56ib2pclpgpxyo3gbm8qjhvpa045fniu4g24ubdvs1f6ddtgsfvz2dgb1co0vhord1arbjb9uxa4hdrg5nykkpkj6g9xgagznw5zrbrvw3034o5mgv259redgb23n3gnzq6mi6lm3279a0h5y3j5d7aa7tm1c1eyvoiz533cx9agblnpqvozj787ehnwcl11j04thoj17p0eqe4j3lif50y7h6zu9hlivnyim0pgy6xq524lnoulof0k29vb3jjh6mled8fjm1czxcczowvl7wcfexgi88vzt6v358fmnqtkhytlxnghtrjdm1nm6apwscjrrrdevzxugnk9blayt51j4un75tv5w9dpdgs2so99qt52j2igwfzq7avdnnbt4d6yqy9l0dzdqy1fxca22p6157afzzrdvarrpgzdb2q1cchu9ptzfyqi2mt94bijo8hju7iu0n1375ohazfdmuee0laukds06xv7niume9jh20p2ctjt0j3ql8dhrw2msv7fyx553mat38tz9ztsg9lfy8ngzrne475c7t0zv65w1j8ykm4q45jkw80w7rqf1trrde8rnehnx1pqld32depn6nnwd0vxexwbn3rqu3oiooqjc10yhkka7c3ujkrsjtacti4nknkupfruaoitgn9td49krpr8zuxwjk7tqtwgbwoo2an4tfprim5x90m0rme6py2wq77qqlwi6y8vre9ir9er3ki0m8pad0ys6w9z70tl6wpvh7kdca5njf2nkrly1leklcnklsys1nlwd1w2itifhm4xb8w6zazh41jruatqlrvm1sp9kv3mj3kwj7aa2hs3iv062fkql5h5dtm51dnxukmiri0pwcfw12f2gaxq5g6u0lexz90x3pq0lqy20wcuj1ymox2iqssplcn3vp5qo1nhadt095irigwl5mtbe2rt2w92yexztpkzhr92pqatolroq6lijta95hcni3bh4bj2ixp9vws0al7elzpylkz8ru3bf0pkkkv4j30icruw47niv76hjsx6wkrvade6263qq3fr7dqgvst9x7fm4ia90f4jj6qy5mci67zcx1aozing84sbs5qwdzs96gy6cxhb1dayb2qk1aaln9lphprv0vg8pymfrz6hwx8cuqivcnzhbepiuyfeagqus6xr4wrdmgeyg61ihzx6oam9lwwh509uauhckt1m9nnqcda7zjcvc521u6b2hew8xauswddrc6fpjgl7uedtd3ccgm1jjktqa3buwj06f380mxhvqk929dw9v72o4s0gj0rwghuxioea7s49p2mt9hna0ql4ek9md9gb7oi8eutzmooam05q4j0rrzgq7i6k71ccmsqh0otdfl4bp0a5uk8z9sfr7skfpuvuy8ikbizzm6m214zh94l9rjfq6maf4l2uj84acyxhyi5jqj8g0lwh9szxq6s0a3fz98ik3h1o8x5ddswm0fo6gde16b725hi6wkqahzzv7r16j1qdm36kneonalecbpel2gimybdbx0mz5tohi81rjlkre44wvibbdbofcvxaujeuic4t5ueiffwmtxtingldl4zsaju68bso0t7dc0c3d2zb5jimda3y0wjtr4sndovexr0xv0z8d2bdod0yavyelnobg24b5la5crcmcgajdnt4zrvq1g1sp7vz4qtrti5as8bq4l6drotimfwa45ydg7snz7aehtfr4wqdxshblgbuaochh6wwpw9jcrxudo6ocnmbogd4r7f37qlrerishdnulnnc3py6or26ajp6qov7kknldt1y2shva
q631332g52e5pj8e8cm831yhy3sb1ilj2tu5urn9iwb9o4iva4i60t747fwx4t792uj0pnaz0tx1swgc3nq9pcy50r1bsn6q4yqjy4nq42ut44ctdk7v7copvnmkzvlqrhjgu79e77nd79tx5yl52hw1r7g3o8orq6suia0m3hs16zvlk2gv9abuj56uy71y6bz4v2sk4r2crgw01u6cae6mz61c0pejwa2a5fserswlfmfjskhg3tkljgebz8kokhw606a27sa9w8yqg6ewnmrz9xu4zf4gjd8uxqarba8pojgwl99dksurjztozu06dicpwcicf4qs53z0pnw61tk4k9ta657ldzwlxu0qumrtmx0v74avk8ager2xihrby16831l6jo99av9n17q7cmw7uh94g1112nvhis1ozf25hewmtt0ylg01yoc4ayll1o99l7gbm77llx43cllomxbcb6ql7d1x2aknar835nr1dy0kgayb59stw6imqt6yeqsh21bnc149uq2k8gl1gmtp9wym0xjvp8lmt12c1ju1j8r5sygccafplosmsib7shsrfz3d25ypxy4lbpedauohaxx66t5i0nm552vsdgvwgwak8bhs2fys58d4vgfn3rju05sss8lyp0bw9hrm89ioybj7z5fofjnat4jrwzr5eny3pndh38bdft0f8t8q3h8xvlp5kol9nti72sc45pxs6fppfne3ms66o6on7c6cbn4vxixnx11urnq6mgnnkfitjhwntdfn4ion4zvs0u8jzzfihvgc7f2gjmz640jytsibmfkysgsvjacq8zp03lg0jplzprz2e4sljr8eigdhn3euvo91lk7l42gcc81mbc0valcyng7zcf8045qeik2mhwq4usk57t7r7x1prb6xa5m661ysz0o1nmpxh66fnzue6fnmomidh8noqzwrolherfxnh7f4u7zek3nx2upz7txxn7fcso6ebzobg23ofkvhq3jkhtrt7fjmgmovp3efztkqq1bbosstsfiocb5x3khlxvb1up11ccuytfmazm0fr771wl41fi8u1r3lv52h1t1gtdxs5enss56o9co7aop0jo3t12xs6xbqpvbtlzuz9opysywpi061dyc7uc2s0osu0bk5yl93w5y7dfloi024oo8foja32btd9kz27yqetu32wz0utbyvorrsojjfk8u1oggqiouhqukz96gfafql8e3ltc5eyse87sd90lwocam6uqi06w7d0d8r7hs1sjg15exnomzm1zkp7nl3uthjhnsrrdwbxpc0pjb95mxxif1f81rguc6gfqatjndav9l6j04qnd1uhcalt3ju280dqch0z7pyxrid45rgddxtf8fgdhlx03q75e0p389bs2byma1srjdf4nxcd5emw66gi4a1elkj1hq3ycfxpm0y2vw5xfs4lmo97uvbaatqmu96hr0rnlhjdy3wv0calga7lqb6ytge49eh46bl5xii5mt8dbxlboow61vge2s8rlxn265nenttiu437jcb9f867ttica4w8nf62eefzio5o7kvofjr4ii == \f\w\q\u\4\t\g\i\o\i\3\l\c\x\z\i\z\p\v\r\n\h\4\s\q\w\p\l\9\i\t\b\7\2\p\e\k\e\a\h\3\x\j\f\w\y\7\8\h\d\9\n\w\3\i\6\v\9\j\v\o\s\j\u\m\p\o\n\a\h\x\r\6\3\p\b\m\u\m\0\z\w\g\b\q\b\t\n\m\0\7\0\a\n\s\u\8\p\l\g\5\a\x\x\1\v\b\c\z\i\m\g\x\b\k\l\m\h\9\p\t\f\k\v\8\1\u\g\t\r\e\4\a\y\a\a\t\b\2\u\7\0\b\p\1\1\c\a\8\m\n\q\n\0\e\0\7\n\q\e\7\v\7\9\h\n\a\x\m\y\w\o\i\w\p\h\j\2\z\a\z\w\6\d\d\w\8\g\7\3\z\b\a\4\w\a\4\5\i\k\i\3\2\n\k\g\7\q\y\g\3\8\n\j\v\m\x\4\w\8\2\b\i\a\1\s\1\2\p\n\5\k\m\t\e\0\j\n\m\t\1\z\n\x\i\6\5\g\9\8\2\e\j\j\f\c\1\4\r\8\y\s\0\a\y\a\p\0\i\n\6\z\t\q\t\f\v\h\a\0\d\b\a\5\9\e\r\q\t\o\z\m\x\i\n\c\b\i\6\l\3\0\s\1\0\t\r\o\g\4\c\x\f\r\n\g\n\4\o\h\n\e\m\k\n\0\f\v\v\c\s\l\p\3\g\m\z\9\y\2\y\5\p\f\h\0\3\r\t\2\3\z\r\f\5\5\g\n\x\m\z\l\m\o\4\h\9\q\k\s\6\n\j\3\4\p\4\i\n\2\u\q\o\9\z\0\p\y\i\5\b\8\l\0\z\1\q\d\y\q\k\8\6\f\b\j\6\b\m\d\w\a\w\c\l\h\8\2\e\c\5\o\c\6\u\9\v\3\c\j\s\3\b\2\0\a\v\x\v\u\d\s\k\c\q\b\1\t\q\y\g\0\g\w\i\s\q\b\1\z\n\q\u\n\5\8\n\s\v\x\n\v\g\l\q\z\s\o\2\l\e\7\k\r\7\7\8\u\b\b\b\v\m\0\a\i\r\k\x\m\t\m\5\o\n\e\t\0\u\w\a\9\h\3\f\7\b\6\l\p\c\y\0\0\w\3\x\a\k\u\e\3\p\w\k\w\7\9\w\7\v\r\q\4\e\2\8\7\a\a\1\s\d\2\t\b\y\h\z\b\3\h\z\j\n\n\g\o\p\c\3\0\4\e\h\r\m\r\o\q\w\9\l\5\b\t\3\8\b\t\t\u\j\5\f\6\2\p\7\e\j\8\c\a\8\p\q\h\g\9\8\k\9\4\m\o\u\2\x\h\y\p\n\i\l\f\v\9\y\k\2\o\r\h\o\8\a\h\g\p\l\a\g\v\k\6\h\w\6\t\l\z\g\b\w\d\u\w\0\3\n\c\y\p\8\l\a\o\4\y\e\d\n\g\f\d\5\6\i\b\2\p\c\l\p\g\p\x\y\o\3\g\b\m\8\q\j\h\v\p\a\0\4\5\f\n\i\u\4\g\2\4\u\b\d\v\s\1\f\6\d\d\t\g\s\f\v\z\2\d\g\b\1\c\o\0\v\h\o\r\d\1\a\r\b\j\b\9\u\x\a\4\h\d\r\g\5\n\y\k\k\p\k\j\6\g\9\x\g\a\g\z\n\w\5\z\r\b\r\v\w\3\0\3\4\o\5\m\g\v\2\5\9\r\e\d\g\b\2\3\n\3\g\n\z\q\6\m\i\6\l\m\3\2\7\9\a\0\h\5\y\3\j\5\d\7\a\a\7\t\m\1\c\1\e\y\v\o\i\z\5\3\3\c\x\9\a\g\b\l\n\p\q\v\o\z\j\7\8\7\e\h\n\w\c\l\1\1\j\0\4\t\h\o\j\1\7\p\0\e\q\e\4\j\3\l\i\f\5\0\y\7\h\6\z\u\9\h\l\i\v\n\y\i\m\0\p\g\y\6\x\q\5\2\4\l\n\o\u\l\o\f\0\k\2\9\v\b\3\j\j\h\6\m\l\e\d\8\f\j\m\1\c\z\x\c\c\z\o\w\v\l\7\w\c\f\e\x\g\i\8\8\v\z\t\6\v\3\5\8\f\m\n\q\t\k\h\y
\t\l\x\n\g\h\t\r\j\d\m\1\n\m\6\a\p\w\s\c\j\r\r\r\d\e\v\z\x\u\g\n\k\9\b\l\a\y\t\5\1\j\4\u\n\7\5\t\v\5\w\9\d\p\d\g\s\2\s\o\9\9\q\t\5\2\j\2\i\g\w\f\z\q\7\a\v\d\n\n\b\t\4\d\6\y\q\y\9\l\0\d\z\d\q\y\1\f\x\c\a\2\2\p\6\1\5\7\a\f\z\z\r\d\v\a\r\r\p\g\z\d\b\2\q\1\c\c\h\u\9\p\t\z\f\y\q\i\2\m\t\9\4\b\i\j\o\8\h\j\u\7\i\u\0\n\1\3\7\5\o\h\a\z\f\d\m\u\e\e\0\l\a\u\k\d\s\0\6\x\v\7\n\i\u\m\e\9\j\h\2\0\p\2\c\t\j\t\0\j\3\q\l\8\d\h\r\w\2\m\s\v\7\f\y\x\5\5\3\m\a\t\3\8\t\z\9\z\t\s\g\9\l\f\y\8\n\g\z\r\n\e\4\7\5\c\7\t\0\z\v\6\5\w\1\j\8\y\k\m\4\q\4\5\j\k\w\8\0\w\7\r\q\f\1\t\r\r\d\e\8\r\n\e\h\n\x\1\p\q\l\d\3\2\d\e\p\n\6\n\n\w\d\0\v\x\e\x\w\b\n\3\r\q\u\3\o\i\o\o\q\j\c\1\0\y\h\k\k\a\7\c\3\u\j\k\r\s\j\t\a\c\t\i\4\n\k\n\k\u\p\f\r\u\a\o\i\t\g\n\9\t\d\4\9\k\r\p\r\8\z\u\x\w\j\k\7\t\q\t\w\g\b\w\o\o\2\a\n\4\t\f\p\r\i\m\5\x\9\0\m\0\r\m\e\6\p\y\2\w\q\7\7\q\q\l\w\i\6\y\8\v\r\e\9\i\r\9\e\r\3\k\i\0\m\8\p\a\d\0\y\s\6\w\9\z\7\0\t\l\6\w\p\v\h\7\k\d\c\a\5\n\j\f\2\n\k\r\l\y\1\l\e\k\l\c\n\k\l\s\y\s\1\n\l\w\d\1\w\2\i\t\i\f\h\m\4\x\b\8\w\6\z\a\z\h\4\1\j\r\u\a\t\q\l\r\v\m\1\s\p\9\k\v\3\m\j\3\k\w\j\7\a\a\2\h\s\3\i\v\0\6\2\f\k\q\l\5\h\5\d\t\m\5\1\d\n\x\u\k\m\i\r\i\0\p\w\c\f\w\1\2\f\2\g\a\x\q\5\g\6\u\0\l\e\x\z\9\0\x\3\p\q\0\l\q\y\2\0\w\c\u\j\1\y\m\o\x\2\i\q\s\s\p\l\c\n\3\v\p\5\q\o\1\n\h\a\d\t\0\9\5\i\r\i\g\w\l\5\m\t\b\e\2\r\t\2\w\9\2\y\e\x\z\t\p\k\z\h\r\9\2\p\q\a\t\o\l\r\o\q\6\l\i\j\t\a\9\5\h\c\n\i\3\b\h\4\b\j\2\i\x\p\9\v\w\s\0\a\l\7\e\l\z\p\y\l\k\z\8\r\u\3\b\f\0\p\k\k\k\v\4\j\3\0\i\c\r\u\w\4\7\n\i\v\7\6\h\j\s\x\6\w\k\r\v\a\d\e\6\2\6\3\q\q\3\f\r\7\d\q\g\v\s\t\9\x\7\f\m\4\i\a\9\0\f\4\j\j\6\q\y\5\m\c\i\6\7\z\c\x\1\a\o\z\i\n\g\8\4\s\b\s\5\q\w\d\z\s\9\6\g\y\6\c\x\h\b\1\d\a\y\b\2\q\k\1\a\a\l\n\9\l\p\h\p\r\v\0\v\g\8\p\y\m\f\r\z\6\h\w\x\8\c\u\q\i\v\c\n\z\h\b\e\p\i\u\y\f\e\a\g\q\u\s\6\x\r\4\w\r\d\m\g\e\y\g\6\1\i\h\z\x\6\o\a\m\9\l\w\w\h\5\0\9\u\a\u\h\c\k\t\1\m\9\n\n\q\c\d\a\7\z\j\c\v\c\5\2\1\u\6\b\2\h\e\w\8\x\a\u\s\w\d\d\r\c\6\f\p\j\g\l\7\u\e\d\t\d\3\c\c\g\m\1\j\j\k\t\q\a\3\b\u\w\j\0\6\f\3\8\0\m\x\h\v\q\k\9\2\9\d\w\9\v\7\2\o\4\s\0\g\j\0\r\w\g\h\u\x\i\o\e\a\7\s\4\9\p\2\m\t\9\h\n\a\0\q\l\4\e\k\9\m\d\9\g\b\7\o\i\8\e\u\t\z\m\o\o\a\m\0\5\q\4\j\0\r\r\z\g\q\7\i\6\k\7\1\c\c\m\s\q\h\0\o\t\d\f\l\4\b\p\0\a\5\u\k\8\z\9\s\f\r\7\s\k\f\p\u\v\u\y\8\i\k\b\i\z\z\m\6\m\2\1\4\z\h\9\4\l\9\r\j\f\q\6\m\a\f\4\l\2\u\j\8\4\a\c\y\x\h\y\i\5\j\q\j\8\g\0\l\w\h\9\s\z\x\q\6\s\0\a\3\f\z\9\8\i\k\3\h\1\o\8\x\5\d\d\s\w\m\0\f\o\6\g\d\e\1\6\b\7\2\5\h\i\6\w\k\q\a\h\z\z\v\7\r\1\6\j\1\q\d\m\3\6\k\n\e\o\n\a\l\e\c\b\p\e\l\2\g\i\m\y\b\d\b\x\0\m\z\5\t\o\h\i\8\1\r\j\l\k\r\e\4\4\w\v\i\b\b\d\b\o\f\c\v\x\a\u\j\e\u\i\c\4\t\5\u\e\i\f\f\w\m\t\x\t\i\n\g\l\d\l\4\z\s\a\j\u\6\8\b\s\o\0\t\7\d\c\0\c\3\d\2\z\b\5\j\i\m\d\a\3\y\0\w\j\t\r\4\s\n\d\o\v\e\x\r\0\x\v\0\z\8\d\2\b\d\o\d\0\y\a\v\y\e\l\n\o\b\g\2\4\b\5\l\a\5\c\r\c\m\c\g\a\j\d\n\t\4\z\r\v\q\1\g\1\s\p\7\v\z\4\q\t\r\t\i\5\a\s\8\b\q\4\l\6\d\r\o\t\i\m\f\w\a\4\5\y\d\g\7\s\n\z\7\a\e\h\t\f\r\4\w\q\d\x\s\h\b\l\g\b\u\a\o\c\h\h\6\w\w\p\w\9\j\c\r\x\u\d\o\6\o\c\n\m\b\o\g\d\4\r\7\f\3\7\q\l\r\e\r\i\s\h\d\n\u\l\n\n\c\3\p\y\6\o\r\2\6\a\j\p\6\q\o\v\7\k\k\n\l\d\t\1\y\2\s\h\v\a\q\6\3\1\3\3\2\g\5\2\e\5\p\j\8\e\8\c\m\8\3\1\y\h\y\3\s\b\1\i\l\j\2\t\u\5\u\r\n\9\i\w\b\9\o\4\i\v\a\4\i\6\0\t\7\4\7\f\w\x\4\t\7\9\2\u\j\0\p\n\a\z\0\t\x\1\s\w\g\c\3\n\q\9\p\c\y\5\0\r\1\b\s\n\6\q\4\y\q\j\y\4\n\q\4\2\u\t\4\4\c\t\d\k\7\v\7\c\o\p\v\n\m\k\z\v\l\q\r\h\j\g\u\7\9\e\7\7\n\d\7\9\t\x\5\y\l\5\2\h\w\1\r\7\g\3\o\8\o\r\q\6\s\u\i\a\0\m\3\h\s\1\6\z\v\l\k\2\g\v\9\a\b\u\j\5\6\u\y\7\1\y\6\b\z\4\v\2\s\k\4\r\2\c\r\g\w\0\1\u\6\c\a\e\6\m\z\6\1\c\0\p\e\j\w\a\2\a\5\f\s\e\r\s\w\l\f\m\f\j\s\k\h\g\3\t\k\l\j\g\e\b\z\8\
k\o\k\h\w\6\0\6\a\2\7\s\a\9\w\8\y\q\g\6\e\w\n\m\r\z\9\x\u\4\z\f\4\g\j\d\8\u\x\q\a\r\b\a\8\p\o\j\g\w\l\9\9\d\k\s\u\r\j\z\t\o\z\u\0\6\d\i\c\p\w\c\i\c\f\4\q\s\5\3\z\0\p\n\w\6\1\t\k\4\k\9\t\a\6\5\7\l\d\z\w\l\x\u\0\q\u\m\r\t\m\x\0\v\7\4\a\v\k\8\a\g\e\r\2\x\i\h\r\b\y\1\6\8\3\1\l\6\j\o\9\9\a\v\9\n\1\7\q\7\c\m\w\7\u\h\9\4\g\1\1\1\2\n\v\h\i\s\1\o\z\f\2\5\h\e\w\m\t\t\0\y\l\g\0\1\y\o\c\4\a\y\l\l\1\o\9\9\l\7\g\b\m\7\7\l\l\x\4\3\c\l\l\o\m\x\b\c\b\6\q\l\7\d\1\x\2\a\k\n\a\r\8\3\5\n\r\1\d\y\0\k\g\a\y\b\5\9\s\t\w\6\i\m\q\t\6\y\e\q\s\h\2\1\b\n\c\1\4\9\u\q\2\k\8\g\l\1\g\m\t\p\9\w\y\m\0\x\j\v\p\8\l\m\t\1\2\c\1\j\u\1\j\8\r\5\s\y\g\c\c\a\f\p\l\o\s\m\s\i\b\7\s\h\s\r\f\z\3\d\2\5\y\p\x\y\4\l\b\p\e\d\a\u\o\h\a\x\x\6\6\t\5\i\0\n\m\5\5\2\v\s\d\g\v\w\g\w\a\k\8\b\h\s\2\f\y\s\5\8\d\4\v\g\f\n\3\r\j\u\0\5\s\s\s\8\l\y\p\0\b\w\9\h\r\m\8\9\i\o\y\b\j\7\z\5\f\o\f\j\n\a\t\4\j\r\w\z\r\5\e\n\y\3\p\n\d\h\3\8\b\d\f\t\0\f\8\t\8\q\3\h\8\x\v\l\p\5\k\o\l\9\n\t\i\7\2\s\c\4\5\p\x\s\6\f\p\p\f\n\e\3\m\s\6\6\o\6\o\n\7\c\6\c\b\n\4\v\x\i\x\n\x\1\1\u\r\n\q\6\m\g\n\n\k\f\i\t\j\h\w\n\t\d\f\n\4\i\o\n\4\z\v\s\0\u\8\j\z\z\f\i\h\v\g\c\7\f\2\g\j\m\z\6\4\0\j\y\t\s\i\b\m\f\k\y\s\g\s\v\j\a\c\q\8\z\p\0\3\l\g\0\j\p\l\z\p\r\z\2\e\4\s\l\j\r\8\e\i\g\d\h\n\3\e\u\v\o\9\1\l\k\7\l\4\2\g\c\c\8\1\m\b\c\0\v\a\l\c\y\n\g\7\z\c\f\8\0\4\5\q\e\i\k\2\m\h\w\q\4\u\s\k\5\7\t\7\r\7\x\1\p\r\b\6\x\a\5\m\6\6\1\y\s\z\0\o\1\n\m\p\x\h\6\6\f\n\z\u\e\6\f\n\m\o\m\i\d\h\8\n\o\q\z\w\r\o\l\h\e\r\f\x\n\h\7\f\4\u\7\z\e\k\3\n\x\2\u\p\z\7\t\x\x\n\7\f\c\s\o\6\e\b\z\o\b\g\2\3\o\f\k\v\h\q\3\j\k\h\t\r\t\7\f\j\m\g\m\o\v\p\3\e\f\z\t\k\q\q\1\b\b\o\s\s\t\s\f\i\o\c\b\5\x\3\k\h\l\x\v\b\1\u\p\1\1\c\c\u\y\t\f\m\a\z\m\0\f\r\7\7\1\w\l\4\1\f\i\8\u\1\r\3\l\v\5\2\h\1\t\1\g\t\d\x\s\5\e\n\s\s\5\6\o\9\c\o\7\a\o\p\0\j\o\3\t\1\2\x\s\6\x\b\q\p\v\b\t\l\z\u\z\9\o\p\y\s\y\w\p\i\0\6\1\d\y\c\7\u\c\2\s\0\o\s\u\0\b\k\5\y\l\9\3\w\5\y\7\d\f\l\o\i\0\2\4\o\o\8\f\o\j\a\3\2\b\t\d\9\k\z\2\7\y\q\e\t\u\3\2\w\z\0\u\t\b\y\v\o\r\r\s\o\j\j\f\k\8\u\1\o\g\g\q\i\o\u\h\q\u\k\z\9\6\g\f\a\f\q\l\8\e\3\l\t\c\5\e\y\s\e\8\7\s\d\9\0\l\w\o\c\a\m\6\u\q\i\0\6\w\7\d\0\d\8\r\7\h\s\1\s\j\g\1\5\e\x\n\o\m\z\m\1\z\k\p\7\n\l\3\u\t\h\j\h\n\s\r\r\d\w\b\x\p\c\0\p\j\b\9\5\m\x\x\i\f\1\f\8\1\r\g\u\c\6\g\f\q\a\t\j\n\d\a\v\9\l\6\j\0\4\q\n\d\1\u\h\c\a\l\t\3\j\u\2\8\0\d\q\c\h\0\z\7\p\y\x\r\i\d\4\5\r\g\d\d\x\t\f\8\f\g\d\h\l\x\0\3\q\7\5\e\0\p\3\8\9\b\s\2\b\y\m\a\1\s\r\j\d\f\4\n\x\c\d\5\e\m\w\6\6\g\i\4\a\1\e\l\k\j\1\h\q\3\y\c\f\x\p\m\0\y\2\v\w\5\x\f\s\4\l\m\o\9\7\u\v\b\a\a\t\q\m\u\9\6\h\r\0\r\n\l\h\j\d\y\3\w\v\0\c\a\l\g\a\7\l\q\b\6\y\t\g\e\4\9\e\h\4\6\b\l\5\x\i\i\5\m\t\8\d\b\x\l\b\o\o\w\6\1\v\g\e\2\s\8\r\l\x\n\2\6\5\n\e\n\t\t\i\u\4\3\7\j\c\b\9\f\8\6\7\t\t\i\c\a\4\w\8\n\f\6\2\e\e\f\z\i\o\5\o\7\k\v\o\f\j\r\4\i\i ]] 00:06:50.999 00:06:50.999 real 0m1.474s 00:06:50.999 user 0m1.043s 00:06:50.999 sys 0m0.611s 00:06:50.999 12:31:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.999 12:31:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:50.999 12:31:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:06:50.999 12:31:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:06:50.999 12:31:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:06:50.999 12:31:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:50.999 12:31:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:50.999 12:31:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:06:50.999 12:31:16 
spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:50.999 12:31:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:06:50.999 12:31:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:50.999 12:31:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:06:50.999 12:31:16 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:50.999 12:31:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:50.999 [2024-07-12 12:31:17.015485] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:06:50.999 [2024-07-12 12:31:17.015766] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63161 ] 00:06:50.999 { 00:06:50.999 "subsystems": [ 00:06:50.999 { 00:06:50.999 "subsystem": "bdev", 00:06:50.999 "config": [ 00:06:50.999 { 00:06:50.999 "params": { 00:06:50.999 "trtype": "pcie", 00:06:50.999 "traddr": "0000:00:10.0", 00:06:50.999 "name": "Nvme0" 00:06:50.999 }, 00:06:50.999 "method": "bdev_nvme_attach_controller" 00:06:50.999 }, 00:06:50.999 { 00:06:50.999 "method": "bdev_wait_for_examine" 00:06:50.999 } 00:06:50.999 ] 00:06:50.999 } 00:06:50.999 ] 00:06:50.999 } 00:06:51.258 [2024-07-12 12:31:17.148452] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.258 [2024-07-12 12:31:17.274278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.258 [2024-07-12 12:31:17.329891] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:51.776  Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:51.776 00:06:51.776 12:31:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:51.776 ************************************ 00:06:51.776 END TEST spdk_dd_basic_rw 00:06:51.776 ************************************ 00:06:51.776 00:06:51.776 real 0m19.662s 00:06:51.776 user 0m14.393s 00:06:51.776 sys 0m6.842s 00:06:51.776 12:31:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.776 12:31:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:51.776 12:31:17 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:06:51.776 12:31:17 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:51.776 12:31:17 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:51.776 12:31:17 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.776 12:31:17 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:51.776 ************************************ 00:06:51.776 START TEST spdk_dd_posix 00:06:51.776 ************************************ 00:06:51.776 12:31:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:51.776 * Looking for test storage... 
00:06:51.776 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:51.776 12:31:17 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:51.776 12:31:17 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:51.776 12:31:17 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:51.776 12:31:17 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:51.776 12:31:17 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.776 12:31:17 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.776 12:31:17 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.776 12:31:17 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:06:51.776 12:31:17 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:51.776 12:31:17 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:51.776 12:31:17 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:06:51.776 12:31:17 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:51.776 12:31:17 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:06:51.776 12:31:17 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:51.776 12:31:17 spdk_dd.spdk_dd_posix 
-- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:51.776 12:31:17 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:06:51.776 12:31:17 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:51.776 * First test run, liburing in use 00:06:51.776 12:31:17 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:06:51.776 12:31:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:51.776 12:31:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.776 12:31:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:51.776 ************************************ 00:06:51.776 START TEST dd_flag_append 00:06:51.776 ************************************ 00:06:51.776 12:31:17 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1123 -- # append 00:06:51.776 12:31:17 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:06:51.776 12:31:17 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:06:51.776 12:31:17 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:06:51.776 12:31:17 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:51.776 12:31:17 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:51.776 12:31:17 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=fqtmpjsuk9jyq6zn4j0vgjbcpylx355j 00:06:51.776 12:31:17 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:06:51.776 12:31:17 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:51.776 12:31:17 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:51.776 12:31:17 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=793ffg6x3i50lg8mshf8hc01g2zaodxf 00:06:51.776 12:31:17 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s fqtmpjsuk9jyq6zn4j0vgjbcpylx355j 00:06:51.776 12:31:17 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s 793ffg6x3i50lg8mshf8hc01g2zaodxf 00:06:51.776 12:31:17 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:52.035 [2024-07-12 12:31:17.877927] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:06:52.035 [2024-07-12 12:31:17.878061] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63225 ] 00:06:52.035 [2024-07-12 12:31:18.013006] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.291 [2024-07-12 12:31:18.132509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.291 [2024-07-12 12:31:18.186032] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:52.548  Copying: 32/32 [B] (average 31 kBps) 00:06:52.548 00:06:52.548 12:31:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ 793ffg6x3i50lg8mshf8hc01g2zaodxffqtmpjsuk9jyq6zn4j0vgjbcpylx355j == \7\9\3\f\f\g\6\x\3\i\5\0\l\g\8\m\s\h\f\8\h\c\0\1\g\2\z\a\o\d\x\f\f\q\t\m\p\j\s\u\k\9\j\y\q\6\z\n\4\j\0\v\g\j\b\c\p\y\l\x\3\5\5\j ]] 00:06:52.548 00:06:52.548 real 0m0.628s 00:06:52.548 user 0m0.379s 00:06:52.548 sys 0m0.269s 00:06:52.548 12:31:18 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.548 ************************************ 00:06:52.548 END TEST dd_flag_append 00:06:52.548 ************************************ 00:06:52.548 12:31:18 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:52.548 12:31:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:52.548 12:31:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:06:52.548 12:31:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:52.548 12:31:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.548 12:31:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:52.548 ************************************ 00:06:52.548 START TEST dd_flag_directory 00:06:52.548 ************************************ 00:06:52.548 12:31:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1123 -- # directory 00:06:52.548 12:31:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:52.548 12:31:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:06:52.548 12:31:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:52.548 12:31:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.548 12:31:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:52.548 12:31:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.548 12:31:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:52.548 12:31:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:06:52.548 12:31:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:52.548 12:31:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:52.548 12:31:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:52.548 12:31:18 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:52.548 [2024-07-12 12:31:18.546232] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:06:52.548 [2024-07-12 12:31:18.546333] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63259 ] 00:06:52.805 [2024-07-12 12:31:18.680356] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.805 [2024-07-12 12:31:18.799833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.805 [2024-07-12 12:31:18.853090] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:53.087 [2024-07-12 12:31:18.889902] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:53.087 [2024-07-12 12:31:18.889989] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:53.087 [2024-07-12 12:31:18.890010] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:53.087 [2024-07-12 12:31:19.007822] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:53.087 12:31:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:06:53.087 12:31:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:53.087 12:31:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:06:53.087 12:31:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:06:53.087 12:31:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:06:53.087 12:31:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:53.087 12:31:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:53.087 12:31:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:06:53.087 12:31:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:53.087 12:31:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:53.087 12:31:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- 
# case "$(type -t "$arg")" in 00:06:53.087 12:31:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:53.087 12:31:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:53.087 12:31:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:53.087 12:31:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:53.087 12:31:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:53.087 12:31:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:53.087 12:31:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:53.349 [2024-07-12 12:31:19.177025] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:06:53.349 [2024-07-12 12:31:19.177145] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63263 ] 00:06:53.349 [2024-07-12 12:31:19.317479] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.606 [2024-07-12 12:31:19.446020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.606 [2024-07-12 12:31:19.503688] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:53.606 [2024-07-12 12:31:19.540318] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:53.606 [2024-07-12 12:31:19.540389] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:53.606 [2024-07-12 12:31:19.540428] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:53.606 [2024-07-12 12:31:19.655435] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:53.863 12:31:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:06:53.863 12:31:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:53.863 12:31:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:06:53.863 12:31:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:06:53.863 12:31:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:06:53.863 12:31:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:53.863 00:06:53.863 real 0m1.266s 00:06:53.863 user 0m0.757s 00:06:53.863 sys 0m0.296s 00:06:53.863 ************************************ 00:06:53.863 END TEST dd_flag_directory 00:06:53.863 ************************************ 00:06:53.863 12:31:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.863 12:31:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@10 -- # set +x 00:06:53.863 12:31:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:53.863 12:31:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:06:53.863 12:31:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:53.863 12:31:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.863 12:31:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:53.863 ************************************ 00:06:53.863 START TEST dd_flag_nofollow 00:06:53.863 ************************************ 00:06:53.863 12:31:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1123 -- # nofollow 00:06:53.864 12:31:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:53.864 12:31:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:53.864 12:31:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:53.864 12:31:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:53.864 12:31:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:53.864 12:31:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:06:53.864 12:31:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:53.864 12:31:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:53.864 12:31:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:53.864 12:31:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:53.864 12:31:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:53.864 12:31:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:53.864 12:31:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:53.864 12:31:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:53.864 12:31:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:53.864 12:31:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:53.864 
[2024-07-12 12:31:19.865142] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:06:53.864 [2024-07-12 12:31:19.865246] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63297 ] 00:06:54.121 [2024-07-12 12:31:20.001538] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.121 [2024-07-12 12:31:20.122306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.121 [2024-07-12 12:31:20.176538] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:54.379 [2024-07-12 12:31:20.211686] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:54.379 [2024-07-12 12:31:20.211756] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:54.379 [2024-07-12 12:31:20.211773] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:54.379 [2024-07-12 12:31:20.330107] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:54.379 12:31:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:06:54.379 12:31:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:54.379 12:31:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:06:54.379 12:31:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:06:54.379 12:31:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:06:54.379 12:31:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:54.379 12:31:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:54.379 12:31:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:06:54.379 12:31:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:54.379 12:31:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.379 12:31:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:54.379 12:31:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.379 12:31:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:54.379 12:31:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.379 12:31:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:54.379 12:31:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:54.379 12:31:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:54.379 12:31:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:54.638 [2024-07-12 12:31:20.495151] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:06:54.638 [2024-07-12 12:31:20.495289] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63312 ] 00:06:54.638 [2024-07-12 12:31:20.634053] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.896 [2024-07-12 12:31:20.755095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.896 [2024-07-12 12:31:20.809349] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:54.896 [2024-07-12 12:31:20.843636] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:54.896 [2024-07-12 12:31:20.843708] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:54.896 [2024-07-12 12:31:20.843726] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:54.896 [2024-07-12 12:31:20.956185] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:55.155 12:31:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:06:55.155 12:31:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:55.155 12:31:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:06:55.155 12:31:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:06:55.155 12:31:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:06:55.155 12:31:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:55.155 12:31:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:06:55.155 12:31:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:06:55.155 12:31:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:55.155 12:31:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:55.155 [2024-07-12 12:31:21.124285] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
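Taken together, the dd_flag_nofollow pass above asserts three things; a minimal sketch using the file names from the log (paths shortened):

  ln -fs dd.dump0 dd.dump0.link
  ln -fs dd.dump1 dd.dump1.link
  # Reading through a symlink with --iflag=nofollow must fail ("Too many levels of symbolic links") ...
  ! spdk_dd --if=dd.dump0.link --iflag=nofollow --of=dd.dump1
  # ... writing through one with --oflag=nofollow must fail the same way ...
  ! spdk_dd --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow
  # ... and a plain copy through the link still follows it and succeeds.
  spdk_dd --if=dd.dump0.link --of=dd.dump1
  [[ "$(< dd.dump1)" == "$(< dd.dump0)" ]]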
00:06:55.155 [2024-07-12 12:31:21.124423] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63314 ] 00:06:55.413 [2024-07-12 12:31:21.264148] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.413 [2024-07-12 12:31:21.385087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.413 [2024-07-12 12:31:21.439550] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:55.670  Copying: 512/512 [B] (average 500 kBps) 00:06:55.670 00:06:55.670 ************************************ 00:06:55.670 END TEST dd_flag_nofollow 00:06:55.670 ************************************ 00:06:55.670 12:31:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ qzhd3mbwaa5f33s1ogaxap59xfjzuuiqjn0tmcn4dg5mo6yslc6fqy6l9jb1jdw0x5igs22cxvc1h17wvstq8747mofp2ou51lbk0em6chq9m8vzfd72tfkjui42eoa0dhrlj8cslqnn15io79dng3uc43dfw7prz4pwan9o39wsiv5ojwtzgdcz5agjfnw5ht1hhovqpl71zcrmtij8h3rkdi1ue391ol7dkgbsoy4hm69ey6wctnksotouggofp132gb5ivold8fj7ufh93lezws6jh8vfmwvahp81vvj8al3s5e4hbjrnb8kmcseftrallce3hk0kk1rmtca7gezy06e23y7kor2n2bc6xi7dee5u3c3jc98zjwq3zwz3ttldslipr7mtqjvmbjngg07qxgs02eo447nkzmh49yacupso4ma50g6w4wyxtq5lku5hgbhsb8pppbc62yu84zzqtni25gzg2hflsun441sg4rvaj7jsucdyumcq5yz5 == \q\z\h\d\3\m\b\w\a\a\5\f\3\3\s\1\o\g\a\x\a\p\5\9\x\f\j\z\u\u\i\q\j\n\0\t\m\c\n\4\d\g\5\m\o\6\y\s\l\c\6\f\q\y\6\l\9\j\b\1\j\d\w\0\x\5\i\g\s\2\2\c\x\v\c\1\h\1\7\w\v\s\t\q\8\7\4\7\m\o\f\p\2\o\u\5\1\l\b\k\0\e\m\6\c\h\q\9\m\8\v\z\f\d\7\2\t\f\k\j\u\i\4\2\e\o\a\0\d\h\r\l\j\8\c\s\l\q\n\n\1\5\i\o\7\9\d\n\g\3\u\c\4\3\d\f\w\7\p\r\z\4\p\w\a\n\9\o\3\9\w\s\i\v\5\o\j\w\t\z\g\d\c\z\5\a\g\j\f\n\w\5\h\t\1\h\h\o\v\q\p\l\7\1\z\c\r\m\t\i\j\8\h\3\r\k\d\i\1\u\e\3\9\1\o\l\7\d\k\g\b\s\o\y\4\h\m\6\9\e\y\6\w\c\t\n\k\s\o\t\o\u\g\g\o\f\p\1\3\2\g\b\5\i\v\o\l\d\8\f\j\7\u\f\h\9\3\l\e\z\w\s\6\j\h\8\v\f\m\w\v\a\h\p\8\1\v\v\j\8\a\l\3\s\5\e\4\h\b\j\r\n\b\8\k\m\c\s\e\f\t\r\a\l\l\c\e\3\h\k\0\k\k\1\r\m\t\c\a\7\g\e\z\y\0\6\e\2\3\y\7\k\o\r\2\n\2\b\c\6\x\i\7\d\e\e\5\u\3\c\3\j\c\9\8\z\j\w\q\3\z\w\z\3\t\t\l\d\s\l\i\p\r\7\m\t\q\j\v\m\b\j\n\g\g\0\7\q\x\g\s\0\2\e\o\4\4\7\n\k\z\m\h\4\9\y\a\c\u\p\s\o\4\m\a\5\0\g\6\w\4\w\y\x\t\q\5\l\k\u\5\h\g\b\h\s\b\8\p\p\p\b\c\6\2\y\u\8\4\z\z\q\t\n\i\2\5\g\z\g\2\h\f\l\s\u\n\4\4\1\s\g\4\r\v\a\j\7\j\s\u\c\d\y\u\m\c\q\5\y\z\5 ]] 00:06:55.670 00:06:55.670 real 0m1.888s 00:06:55.670 user 0m1.109s 00:06:55.670 sys 0m0.588s 00:06:55.670 12:31:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.670 12:31:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:55.670 12:31:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:55.670 12:31:21 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:06:55.670 12:31:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:55.670 12:31:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.670 12:31:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:55.928 ************************************ 00:06:55.928 START TEST dd_flag_noatime 00:06:55.928 ************************************ 00:06:55.928 12:31:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1123 -- # noatime 00:06:55.928 12:31:21 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:06:55.928 12:31:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:06:55.928 12:31:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:06:55.928 12:31:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:06:55.928 12:31:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:55.928 12:31:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:55.928 12:31:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1720787481 00:06:55.928 12:31:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:55.928 12:31:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1720787481 00:06:55.928 12:31:21 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:06:56.873 12:31:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:56.873 [2024-07-12 12:31:22.825704] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:06:56.873 [2024-07-12 12:31:22.826200] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63362 ] 00:06:57.154 [2024-07-12 12:31:22.969034] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.154 [2024-07-12 12:31:23.090310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.154 [2024-07-12 12:31:23.149391] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:57.412  Copying: 512/512 [B] (average 500 kBps) 00:06:57.412 00:06:57.412 12:31:23 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:57.412 12:31:23 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1720787481 )) 00:06:57.412 12:31:23 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:57.412 12:31:23 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1720787481 )) 00:06:57.412 12:31:23 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:57.412 [2024-07-12 12:31:23.473668] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
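The dd_flag_noatime pass above records the source file's access time, copies with --iflag=noatime, and expects that atime to be unchanged; a later copy without the flag is expected to advance it. A sketch of that logic (it assumes the filesystem updates atime on reads at all, which is also why the test sleeps before copying):

  atime_before=$(stat --printf=%X dd.dump0)
  sleep 1
  spdk_dd --if=dd.dump0 --iflag=noatime --of=dd.dump1
  (( $(stat --printf=%X dd.dump0) == atime_before ))   # noatime read left atime alone
  spdk_dd --if=dd.dump0 --of=dd.dump1
  (( $(stat --printf=%X dd.dump0) > atime_before ))    # a normal read advanced it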
00:06:57.412 [2024-07-12 12:31:23.473767] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63370 ] 00:06:57.670 [2024-07-12 12:31:23.612636] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.670 [2024-07-12 12:31:23.732961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.928 [2024-07-12 12:31:23.790098] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:58.187  Copying: 512/512 [B] (average 500 kBps) 00:06:58.187 00:06:58.187 12:31:24 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:58.187 ************************************ 00:06:58.187 END TEST dd_flag_noatime 00:06:58.187 ************************************ 00:06:58.187 12:31:24 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1720787483 )) 00:06:58.187 00:06:58.187 real 0m2.335s 00:06:58.187 user 0m0.785s 00:06:58.187 sys 0m0.589s 00:06:58.187 12:31:24 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.187 12:31:24 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:58.187 12:31:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:06:58.187 12:31:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:06:58.187 12:31:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:58.187 12:31:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.187 12:31:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:58.187 ************************************ 00:06:58.187 START TEST dd_flags_misc 00:06:58.187 ************************************ 00:06:58.187 12:31:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1123 -- # io 00:06:58.187 12:31:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:58.187 12:31:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:58.187 12:31:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:58.187 12:31:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:58.187 12:31:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:58.187 12:31:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:58.187 12:31:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:58.187 12:31:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:58.187 12:31:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:58.187 [2024-07-12 12:31:24.193584] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
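dd_flags_misc, which starts above, is a small matrix test: every read flag in flags_ro is paired with every write flag in flags_rw and a 512-byte pattern is round-tripped for each pair. A sketch of the loop (verification simplified to a plain content comparison):

  flags_ro=(direct nonblock)
  flags_rw=("${flags_ro[@]}" sync dsync)
  for flag_ro in "${flags_ro[@]}"; do
    for flag_rw in "${flags_rw[@]}"; do
      spdk_dd --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
      [[ "$(< dd.dump1)" == "$(< dd.dump0)" ]]
    done
  done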
00:06:58.187 [2024-07-12 12:31:24.193712] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63404 ] 00:06:58.445 [2024-07-12 12:31:24.327025] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.445 [2024-07-12 12:31:24.443159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.445 [2024-07-12 12:31:24.498043] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:58.703  Copying: 512/512 [B] (average 500 kBps) 00:06:58.703 00:06:58.703 12:31:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ktngjg4fuo6y6rzk8x79hn82of8ewlm9vxwossj7e248gwxhgb5g3lb2b1gaze0499usk6c5rdq8ef25q8zg052jvkn344w4wei2vti2uu2ea0wyl1wp7l7nb9l01srmx2mmcjd8eyjqj0099djfhvrvzxds7z96e1mdwaq51bkqu7hc8slyefhha5957okbeacbt6htnh7w9vwpvrpru2ywnf6kkb1y8rqkgazx6fn47rf64vigbtdov3oqfrcs2i34dn62390qvv3x1mzd1lqjh30737pwd6c0gdttvd5xdfsylfwc1fl9sm4z2t2tix36sppasahgzw8e07dhfnuh5hefb94h53688ni6jaodb1dlh8g2u516hflrcje88a3rfmozt4zsbeh1i04pux5ir9j6aiq14wav9i4pl3qcn6s8sl33b2u5rcd4nu3tr2onqg8928yno040go33bjvic9pe2cavxgpvml01uduv5abwqw35w0glccek67py == \k\t\n\g\j\g\4\f\u\o\6\y\6\r\z\k\8\x\7\9\h\n\8\2\o\f\8\e\w\l\m\9\v\x\w\o\s\s\j\7\e\2\4\8\g\w\x\h\g\b\5\g\3\l\b\2\b\1\g\a\z\e\0\4\9\9\u\s\k\6\c\5\r\d\q\8\e\f\2\5\q\8\z\g\0\5\2\j\v\k\n\3\4\4\w\4\w\e\i\2\v\t\i\2\u\u\2\e\a\0\w\y\l\1\w\p\7\l\7\n\b\9\l\0\1\s\r\m\x\2\m\m\c\j\d\8\e\y\j\q\j\0\0\9\9\d\j\f\h\v\r\v\z\x\d\s\7\z\9\6\e\1\m\d\w\a\q\5\1\b\k\q\u\7\h\c\8\s\l\y\e\f\h\h\a\5\9\5\7\o\k\b\e\a\c\b\t\6\h\t\n\h\7\w\9\v\w\p\v\r\p\r\u\2\y\w\n\f\6\k\k\b\1\y\8\r\q\k\g\a\z\x\6\f\n\4\7\r\f\6\4\v\i\g\b\t\d\o\v\3\o\q\f\r\c\s\2\i\3\4\d\n\6\2\3\9\0\q\v\v\3\x\1\m\z\d\1\l\q\j\h\3\0\7\3\7\p\w\d\6\c\0\g\d\t\t\v\d\5\x\d\f\s\y\l\f\w\c\1\f\l\9\s\m\4\z\2\t\2\t\i\x\3\6\s\p\p\a\s\a\h\g\z\w\8\e\0\7\d\h\f\n\u\h\5\h\e\f\b\9\4\h\5\3\6\8\8\n\i\6\j\a\o\d\b\1\d\l\h\8\g\2\u\5\1\6\h\f\l\r\c\j\e\8\8\a\3\r\f\m\o\z\t\4\z\s\b\e\h\1\i\0\4\p\u\x\5\i\r\9\j\6\a\i\q\1\4\w\a\v\9\i\4\p\l\3\q\c\n\6\s\8\s\l\3\3\b\2\u\5\r\c\d\4\n\u\3\t\r\2\o\n\q\g\8\9\2\8\y\n\o\0\4\0\g\o\3\3\b\j\v\i\c\9\p\e\2\c\a\v\x\g\p\v\m\l\0\1\u\d\u\v\5\a\b\w\q\w\3\5\w\0\g\l\c\c\e\k\6\7\p\y ]] 00:06:58.703 12:31:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:58.703 12:31:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:58.961 [2024-07-12 12:31:24.818649] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
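The long == \k\t\n... run in the check above is bash's xtrace rendering, not literal source: when the right-hand side of == inside [[ ]] is quoted it is matched as a literal string, and set -x prints it with each character backslash-escaped. The underlying assertion is simply that the copied data matches the source, roughly:

  [[ "$(< dd.dump1)" == "$(< dd.dump0)" ]]   # literal comparison; xtrace shows the RHS escaped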
00:06:58.961 [2024-07-12 12:31:24.818763] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63419 ] 00:06:58.961 [2024-07-12 12:31:24.957872] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.219 [2024-07-12 12:31:25.074976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.219 [2024-07-12 12:31:25.129593] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:59.477  Copying: 512/512 [B] (average 500 kBps) 00:06:59.477 00:06:59.477 12:31:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ktngjg4fuo6y6rzk8x79hn82of8ewlm9vxwossj7e248gwxhgb5g3lb2b1gaze0499usk6c5rdq8ef25q8zg052jvkn344w4wei2vti2uu2ea0wyl1wp7l7nb9l01srmx2mmcjd8eyjqj0099djfhvrvzxds7z96e1mdwaq51bkqu7hc8slyefhha5957okbeacbt6htnh7w9vwpvrpru2ywnf6kkb1y8rqkgazx6fn47rf64vigbtdov3oqfrcs2i34dn62390qvv3x1mzd1lqjh30737pwd6c0gdttvd5xdfsylfwc1fl9sm4z2t2tix36sppasahgzw8e07dhfnuh5hefb94h53688ni6jaodb1dlh8g2u516hflrcje88a3rfmozt4zsbeh1i04pux5ir9j6aiq14wav9i4pl3qcn6s8sl33b2u5rcd4nu3tr2onqg8928yno040go33bjvic9pe2cavxgpvml01uduv5abwqw35w0glccek67py == \k\t\n\g\j\g\4\f\u\o\6\y\6\r\z\k\8\x\7\9\h\n\8\2\o\f\8\e\w\l\m\9\v\x\w\o\s\s\j\7\e\2\4\8\g\w\x\h\g\b\5\g\3\l\b\2\b\1\g\a\z\e\0\4\9\9\u\s\k\6\c\5\r\d\q\8\e\f\2\5\q\8\z\g\0\5\2\j\v\k\n\3\4\4\w\4\w\e\i\2\v\t\i\2\u\u\2\e\a\0\w\y\l\1\w\p\7\l\7\n\b\9\l\0\1\s\r\m\x\2\m\m\c\j\d\8\e\y\j\q\j\0\0\9\9\d\j\f\h\v\r\v\z\x\d\s\7\z\9\6\e\1\m\d\w\a\q\5\1\b\k\q\u\7\h\c\8\s\l\y\e\f\h\h\a\5\9\5\7\o\k\b\e\a\c\b\t\6\h\t\n\h\7\w\9\v\w\p\v\r\p\r\u\2\y\w\n\f\6\k\k\b\1\y\8\r\q\k\g\a\z\x\6\f\n\4\7\r\f\6\4\v\i\g\b\t\d\o\v\3\o\q\f\r\c\s\2\i\3\4\d\n\6\2\3\9\0\q\v\v\3\x\1\m\z\d\1\l\q\j\h\3\0\7\3\7\p\w\d\6\c\0\g\d\t\t\v\d\5\x\d\f\s\y\l\f\w\c\1\f\l\9\s\m\4\z\2\t\2\t\i\x\3\6\s\p\p\a\s\a\h\g\z\w\8\e\0\7\d\h\f\n\u\h\5\h\e\f\b\9\4\h\5\3\6\8\8\n\i\6\j\a\o\d\b\1\d\l\h\8\g\2\u\5\1\6\h\f\l\r\c\j\e\8\8\a\3\r\f\m\o\z\t\4\z\s\b\e\h\1\i\0\4\p\u\x\5\i\r\9\j\6\a\i\q\1\4\w\a\v\9\i\4\p\l\3\q\c\n\6\s\8\s\l\3\3\b\2\u\5\r\c\d\4\n\u\3\t\r\2\o\n\q\g\8\9\2\8\y\n\o\0\4\0\g\o\3\3\b\j\v\i\c\9\p\e\2\c\a\v\x\g\p\v\m\l\0\1\u\d\u\v\5\a\b\w\q\w\3\5\w\0\g\l\c\c\e\k\6\7\p\y ]] 00:06:59.477 12:31:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:59.478 12:31:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:59.478 [2024-07-12 12:31:25.434303] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:06:59.478 [2024-07-12 12:31:25.434477] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63423 ] 00:06:59.737 [2024-07-12 12:31:25.572736] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.737 [2024-07-12 12:31:25.680394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.737 [2024-07-12 12:31:25.739170] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:06:59.995  Copying: 512/512 [B] (average 125 kBps) 00:06:59.995 00:06:59.995 12:31:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ktngjg4fuo6y6rzk8x79hn82of8ewlm9vxwossj7e248gwxhgb5g3lb2b1gaze0499usk6c5rdq8ef25q8zg052jvkn344w4wei2vti2uu2ea0wyl1wp7l7nb9l01srmx2mmcjd8eyjqj0099djfhvrvzxds7z96e1mdwaq51bkqu7hc8slyefhha5957okbeacbt6htnh7w9vwpvrpru2ywnf6kkb1y8rqkgazx6fn47rf64vigbtdov3oqfrcs2i34dn62390qvv3x1mzd1lqjh30737pwd6c0gdttvd5xdfsylfwc1fl9sm4z2t2tix36sppasahgzw8e07dhfnuh5hefb94h53688ni6jaodb1dlh8g2u516hflrcje88a3rfmozt4zsbeh1i04pux5ir9j6aiq14wav9i4pl3qcn6s8sl33b2u5rcd4nu3tr2onqg8928yno040go33bjvic9pe2cavxgpvml01uduv5abwqw35w0glccek67py == \k\t\n\g\j\g\4\f\u\o\6\y\6\r\z\k\8\x\7\9\h\n\8\2\o\f\8\e\w\l\m\9\v\x\w\o\s\s\j\7\e\2\4\8\g\w\x\h\g\b\5\g\3\l\b\2\b\1\g\a\z\e\0\4\9\9\u\s\k\6\c\5\r\d\q\8\e\f\2\5\q\8\z\g\0\5\2\j\v\k\n\3\4\4\w\4\w\e\i\2\v\t\i\2\u\u\2\e\a\0\w\y\l\1\w\p\7\l\7\n\b\9\l\0\1\s\r\m\x\2\m\m\c\j\d\8\e\y\j\q\j\0\0\9\9\d\j\f\h\v\r\v\z\x\d\s\7\z\9\6\e\1\m\d\w\a\q\5\1\b\k\q\u\7\h\c\8\s\l\y\e\f\h\h\a\5\9\5\7\o\k\b\e\a\c\b\t\6\h\t\n\h\7\w\9\v\w\p\v\r\p\r\u\2\y\w\n\f\6\k\k\b\1\y\8\r\q\k\g\a\z\x\6\f\n\4\7\r\f\6\4\v\i\g\b\t\d\o\v\3\o\q\f\r\c\s\2\i\3\4\d\n\6\2\3\9\0\q\v\v\3\x\1\m\z\d\1\l\q\j\h\3\0\7\3\7\p\w\d\6\c\0\g\d\t\t\v\d\5\x\d\f\s\y\l\f\w\c\1\f\l\9\s\m\4\z\2\t\2\t\i\x\3\6\s\p\p\a\s\a\h\g\z\w\8\e\0\7\d\h\f\n\u\h\5\h\e\f\b\9\4\h\5\3\6\8\8\n\i\6\j\a\o\d\b\1\d\l\h\8\g\2\u\5\1\6\h\f\l\r\c\j\e\8\8\a\3\r\f\m\o\z\t\4\z\s\b\e\h\1\i\0\4\p\u\x\5\i\r\9\j\6\a\i\q\1\4\w\a\v\9\i\4\p\l\3\q\c\n\6\s\8\s\l\3\3\b\2\u\5\r\c\d\4\n\u\3\t\r\2\o\n\q\g\8\9\2\8\y\n\o\0\4\0\g\o\3\3\b\j\v\i\c\9\p\e\2\c\a\v\x\g\p\v\m\l\0\1\u\d\u\v\5\a\b\w\q\w\3\5\w\0\g\l\c\c\e\k\6\7\p\y ]] 00:06:59.995 12:31:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:59.995 12:31:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:59.995 [2024-07-12 12:31:26.058559] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:06:59.995 [2024-07-12 12:31:26.058696] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63438 ] 00:07:00.253 [2024-07-12 12:31:26.195340] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.253 [2024-07-12 12:31:26.313646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.510 [2024-07-12 12:31:26.369346] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:00.768  Copying: 512/512 [B] (average 250 kBps) 00:07:00.768 00:07:00.768 12:31:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ ktngjg4fuo6y6rzk8x79hn82of8ewlm9vxwossj7e248gwxhgb5g3lb2b1gaze0499usk6c5rdq8ef25q8zg052jvkn344w4wei2vti2uu2ea0wyl1wp7l7nb9l01srmx2mmcjd8eyjqj0099djfhvrvzxds7z96e1mdwaq51bkqu7hc8slyefhha5957okbeacbt6htnh7w9vwpvrpru2ywnf6kkb1y8rqkgazx6fn47rf64vigbtdov3oqfrcs2i34dn62390qvv3x1mzd1lqjh30737pwd6c0gdttvd5xdfsylfwc1fl9sm4z2t2tix36sppasahgzw8e07dhfnuh5hefb94h53688ni6jaodb1dlh8g2u516hflrcje88a3rfmozt4zsbeh1i04pux5ir9j6aiq14wav9i4pl3qcn6s8sl33b2u5rcd4nu3tr2onqg8928yno040go33bjvic9pe2cavxgpvml01uduv5abwqw35w0glccek67py == \k\t\n\g\j\g\4\f\u\o\6\y\6\r\z\k\8\x\7\9\h\n\8\2\o\f\8\e\w\l\m\9\v\x\w\o\s\s\j\7\e\2\4\8\g\w\x\h\g\b\5\g\3\l\b\2\b\1\g\a\z\e\0\4\9\9\u\s\k\6\c\5\r\d\q\8\e\f\2\5\q\8\z\g\0\5\2\j\v\k\n\3\4\4\w\4\w\e\i\2\v\t\i\2\u\u\2\e\a\0\w\y\l\1\w\p\7\l\7\n\b\9\l\0\1\s\r\m\x\2\m\m\c\j\d\8\e\y\j\q\j\0\0\9\9\d\j\f\h\v\r\v\z\x\d\s\7\z\9\6\e\1\m\d\w\a\q\5\1\b\k\q\u\7\h\c\8\s\l\y\e\f\h\h\a\5\9\5\7\o\k\b\e\a\c\b\t\6\h\t\n\h\7\w\9\v\w\p\v\r\p\r\u\2\y\w\n\f\6\k\k\b\1\y\8\r\q\k\g\a\z\x\6\f\n\4\7\r\f\6\4\v\i\g\b\t\d\o\v\3\o\q\f\r\c\s\2\i\3\4\d\n\6\2\3\9\0\q\v\v\3\x\1\m\z\d\1\l\q\j\h\3\0\7\3\7\p\w\d\6\c\0\g\d\t\t\v\d\5\x\d\f\s\y\l\f\w\c\1\f\l\9\s\m\4\z\2\t\2\t\i\x\3\6\s\p\p\a\s\a\h\g\z\w\8\e\0\7\d\h\f\n\u\h\5\h\e\f\b\9\4\h\5\3\6\8\8\n\i\6\j\a\o\d\b\1\d\l\h\8\g\2\u\5\1\6\h\f\l\r\c\j\e\8\8\a\3\r\f\m\o\z\t\4\z\s\b\e\h\1\i\0\4\p\u\x\5\i\r\9\j\6\a\i\q\1\4\w\a\v\9\i\4\p\l\3\q\c\n\6\s\8\s\l\3\3\b\2\u\5\r\c\d\4\n\u\3\t\r\2\o\n\q\g\8\9\2\8\y\n\o\0\4\0\g\o\3\3\b\j\v\i\c\9\p\e\2\c\a\v\x\g\p\v\m\l\0\1\u\d\u\v\5\a\b\w\q\w\3\5\w\0\g\l\c\c\e\k\6\7\p\y ]] 00:07:00.768 12:31:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:00.768 12:31:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:00.768 12:31:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:00.768 12:31:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:00.768 12:31:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:00.768 12:31:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:00.768 [2024-07-12 12:31:26.683334] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:07:00.768 [2024-07-12 12:31:26.683620] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63448 ] 00:07:00.768 [2024-07-12 12:31:26.820991] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.025 [2024-07-12 12:31:26.933717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.025 [2024-07-12 12:31:26.991102] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:01.283  Copying: 512/512 [B] (average 500 kBps) 00:07:01.283 00:07:01.283 12:31:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 24l69n7pxmcconbcrridmxdouiofi7rb0tpir4xbkz5oqst34omm6t4pszwnn2i7bxvkg6qe89sa5ko6y889b7nrf1deljpxvftkf1arv8idd9ri9dnys9ochqw6wsaj7j9jt8l9zvyet1xhz1bge4u5unnwz27a0vcqjq28tdypsl74njfp8gbu995urwg3dolpc54orbwvns2irppdgsyzs7lc7nhcw60v8xvotirdz5rpidzurerk9d2d5tq7840aastqyopleltfiwppotnyjjp30edegws0vsy8jwyrkwa44gzi9jv2fk4lnpseghr8apcys1w003hi80vxpeggiqnon50lsa4h1ajmvzd6cepek9vyckah2lw4k4ykmyzq1st0kdqtldp9nka83fn5zadz9kqc0dm155lf1tmhw8vvciuakw2lautqdxbhbrsy2m3mievrnjdc42w4gkxyim9yoz1lsv9xp21yj11qxxgltut71q3tse2r42rf == \2\4\l\6\9\n\7\p\x\m\c\c\o\n\b\c\r\r\i\d\m\x\d\o\u\i\o\f\i\7\r\b\0\t\p\i\r\4\x\b\k\z\5\o\q\s\t\3\4\o\m\m\6\t\4\p\s\z\w\n\n\2\i\7\b\x\v\k\g\6\q\e\8\9\s\a\5\k\o\6\y\8\8\9\b\7\n\r\f\1\d\e\l\j\p\x\v\f\t\k\f\1\a\r\v\8\i\d\d\9\r\i\9\d\n\y\s\9\o\c\h\q\w\6\w\s\a\j\7\j\9\j\t\8\l\9\z\v\y\e\t\1\x\h\z\1\b\g\e\4\u\5\u\n\n\w\z\2\7\a\0\v\c\q\j\q\2\8\t\d\y\p\s\l\7\4\n\j\f\p\8\g\b\u\9\9\5\u\r\w\g\3\d\o\l\p\c\5\4\o\r\b\w\v\n\s\2\i\r\p\p\d\g\s\y\z\s\7\l\c\7\n\h\c\w\6\0\v\8\x\v\o\t\i\r\d\z\5\r\p\i\d\z\u\r\e\r\k\9\d\2\d\5\t\q\7\8\4\0\a\a\s\t\q\y\o\p\l\e\l\t\f\i\w\p\p\o\t\n\y\j\j\p\3\0\e\d\e\g\w\s\0\v\s\y\8\j\w\y\r\k\w\a\4\4\g\z\i\9\j\v\2\f\k\4\l\n\p\s\e\g\h\r\8\a\p\c\y\s\1\w\0\0\3\h\i\8\0\v\x\p\e\g\g\i\q\n\o\n\5\0\l\s\a\4\h\1\a\j\m\v\z\d\6\c\e\p\e\k\9\v\y\c\k\a\h\2\l\w\4\k\4\y\k\m\y\z\q\1\s\t\0\k\d\q\t\l\d\p\9\n\k\a\8\3\f\n\5\z\a\d\z\9\k\q\c\0\d\m\1\5\5\l\f\1\t\m\h\w\8\v\v\c\i\u\a\k\w\2\l\a\u\t\q\d\x\b\h\b\r\s\y\2\m\3\m\i\e\v\r\n\j\d\c\4\2\w\4\g\k\x\y\i\m\9\y\o\z\1\l\s\v\9\x\p\2\1\y\j\1\1\q\x\x\g\l\t\u\t\7\1\q\3\t\s\e\2\r\4\2\r\f ]] 00:07:01.283 12:31:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:01.283 12:31:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:01.283 [2024-07-12 12:31:27.315999] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:07:01.283 [2024-07-12 12:31:27.316114] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63457 ] 00:07:01.541 [2024-07-12 12:31:27.455917] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.541 [2024-07-12 12:31:27.568611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.799 [2024-07-12 12:31:27.626015] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:02.058  Copying: 512/512 [B] (average 500 kBps) 00:07:02.058 00:07:02.058 12:31:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 24l69n7pxmcconbcrridmxdouiofi7rb0tpir4xbkz5oqst34omm6t4pszwnn2i7bxvkg6qe89sa5ko6y889b7nrf1deljpxvftkf1arv8idd9ri9dnys9ochqw6wsaj7j9jt8l9zvyet1xhz1bge4u5unnwz27a0vcqjq28tdypsl74njfp8gbu995urwg3dolpc54orbwvns2irppdgsyzs7lc7nhcw60v8xvotirdz5rpidzurerk9d2d5tq7840aastqyopleltfiwppotnyjjp30edegws0vsy8jwyrkwa44gzi9jv2fk4lnpseghr8apcys1w003hi80vxpeggiqnon50lsa4h1ajmvzd6cepek9vyckah2lw4k4ykmyzq1st0kdqtldp9nka83fn5zadz9kqc0dm155lf1tmhw8vvciuakw2lautqdxbhbrsy2m3mievrnjdc42w4gkxyim9yoz1lsv9xp21yj11qxxgltut71q3tse2r42rf == \2\4\l\6\9\n\7\p\x\m\c\c\o\n\b\c\r\r\i\d\m\x\d\o\u\i\o\f\i\7\r\b\0\t\p\i\r\4\x\b\k\z\5\o\q\s\t\3\4\o\m\m\6\t\4\p\s\z\w\n\n\2\i\7\b\x\v\k\g\6\q\e\8\9\s\a\5\k\o\6\y\8\8\9\b\7\n\r\f\1\d\e\l\j\p\x\v\f\t\k\f\1\a\r\v\8\i\d\d\9\r\i\9\d\n\y\s\9\o\c\h\q\w\6\w\s\a\j\7\j\9\j\t\8\l\9\z\v\y\e\t\1\x\h\z\1\b\g\e\4\u\5\u\n\n\w\z\2\7\a\0\v\c\q\j\q\2\8\t\d\y\p\s\l\7\4\n\j\f\p\8\g\b\u\9\9\5\u\r\w\g\3\d\o\l\p\c\5\4\o\r\b\w\v\n\s\2\i\r\p\p\d\g\s\y\z\s\7\l\c\7\n\h\c\w\6\0\v\8\x\v\o\t\i\r\d\z\5\r\p\i\d\z\u\r\e\r\k\9\d\2\d\5\t\q\7\8\4\0\a\a\s\t\q\y\o\p\l\e\l\t\f\i\w\p\p\o\t\n\y\j\j\p\3\0\e\d\e\g\w\s\0\v\s\y\8\j\w\y\r\k\w\a\4\4\g\z\i\9\j\v\2\f\k\4\l\n\p\s\e\g\h\r\8\a\p\c\y\s\1\w\0\0\3\h\i\8\0\v\x\p\e\g\g\i\q\n\o\n\5\0\l\s\a\4\h\1\a\j\m\v\z\d\6\c\e\p\e\k\9\v\y\c\k\a\h\2\l\w\4\k\4\y\k\m\y\z\q\1\s\t\0\k\d\q\t\l\d\p\9\n\k\a\8\3\f\n\5\z\a\d\z\9\k\q\c\0\d\m\1\5\5\l\f\1\t\m\h\w\8\v\v\c\i\u\a\k\w\2\l\a\u\t\q\d\x\b\h\b\r\s\y\2\m\3\m\i\e\v\r\n\j\d\c\4\2\w\4\g\k\x\y\i\m\9\y\o\z\1\l\s\v\9\x\p\2\1\y\j\1\1\q\x\x\g\l\t\u\t\7\1\q\3\t\s\e\2\r\4\2\r\f ]] 00:07:02.058 12:31:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:02.058 12:31:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:02.058 [2024-07-12 12:31:27.937501] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:07:02.058 [2024-07-12 12:31:27.937634] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63472 ] 00:07:02.058 [2024-07-12 12:31:28.076212] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.317 [2024-07-12 12:31:28.189123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.317 [2024-07-12 12:31:28.251026] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:02.575  Copying: 512/512 [B] (average 250 kBps) 00:07:02.575 00:07:02.576 12:31:28 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 24l69n7pxmcconbcrridmxdouiofi7rb0tpir4xbkz5oqst34omm6t4pszwnn2i7bxvkg6qe89sa5ko6y889b7nrf1deljpxvftkf1arv8idd9ri9dnys9ochqw6wsaj7j9jt8l9zvyet1xhz1bge4u5unnwz27a0vcqjq28tdypsl74njfp8gbu995urwg3dolpc54orbwvns2irppdgsyzs7lc7nhcw60v8xvotirdz5rpidzurerk9d2d5tq7840aastqyopleltfiwppotnyjjp30edegws0vsy8jwyrkwa44gzi9jv2fk4lnpseghr8apcys1w003hi80vxpeggiqnon50lsa4h1ajmvzd6cepek9vyckah2lw4k4ykmyzq1st0kdqtldp9nka83fn5zadz9kqc0dm155lf1tmhw8vvciuakw2lautqdxbhbrsy2m3mievrnjdc42w4gkxyim9yoz1lsv9xp21yj11qxxgltut71q3tse2r42rf == \2\4\l\6\9\n\7\p\x\m\c\c\o\n\b\c\r\r\i\d\m\x\d\o\u\i\o\f\i\7\r\b\0\t\p\i\r\4\x\b\k\z\5\o\q\s\t\3\4\o\m\m\6\t\4\p\s\z\w\n\n\2\i\7\b\x\v\k\g\6\q\e\8\9\s\a\5\k\o\6\y\8\8\9\b\7\n\r\f\1\d\e\l\j\p\x\v\f\t\k\f\1\a\r\v\8\i\d\d\9\r\i\9\d\n\y\s\9\o\c\h\q\w\6\w\s\a\j\7\j\9\j\t\8\l\9\z\v\y\e\t\1\x\h\z\1\b\g\e\4\u\5\u\n\n\w\z\2\7\a\0\v\c\q\j\q\2\8\t\d\y\p\s\l\7\4\n\j\f\p\8\g\b\u\9\9\5\u\r\w\g\3\d\o\l\p\c\5\4\o\r\b\w\v\n\s\2\i\r\p\p\d\g\s\y\z\s\7\l\c\7\n\h\c\w\6\0\v\8\x\v\o\t\i\r\d\z\5\r\p\i\d\z\u\r\e\r\k\9\d\2\d\5\t\q\7\8\4\0\a\a\s\t\q\y\o\p\l\e\l\t\f\i\w\p\p\o\t\n\y\j\j\p\3\0\e\d\e\g\w\s\0\v\s\y\8\j\w\y\r\k\w\a\4\4\g\z\i\9\j\v\2\f\k\4\l\n\p\s\e\g\h\r\8\a\p\c\y\s\1\w\0\0\3\h\i\8\0\v\x\p\e\g\g\i\q\n\o\n\5\0\l\s\a\4\h\1\a\j\m\v\z\d\6\c\e\p\e\k\9\v\y\c\k\a\h\2\l\w\4\k\4\y\k\m\y\z\q\1\s\t\0\k\d\q\t\l\d\p\9\n\k\a\8\3\f\n\5\z\a\d\z\9\k\q\c\0\d\m\1\5\5\l\f\1\t\m\h\w\8\v\v\c\i\u\a\k\w\2\l\a\u\t\q\d\x\b\h\b\r\s\y\2\m\3\m\i\e\v\r\n\j\d\c\4\2\w\4\g\k\x\y\i\m\9\y\o\z\1\l\s\v\9\x\p\2\1\y\j\1\1\q\x\x\g\l\t\u\t\7\1\q\3\t\s\e\2\r\4\2\r\f ]] 00:07:02.576 12:31:28 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:02.576 12:31:28 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:02.576 [2024-07-12 12:31:28.574450] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:07:02.576 [2024-07-12 12:31:28.574560] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63476 ] 00:07:02.833 [2024-07-12 12:31:28.711546] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.833 [2024-07-12 12:31:28.831895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.833 [2024-07-12 12:31:28.887536] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:03.091  Copying: 512/512 [B] (average 250 kBps) 00:07:03.091 00:07:03.091 12:31:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 24l69n7pxmcconbcrridmxdouiofi7rb0tpir4xbkz5oqst34omm6t4pszwnn2i7bxvkg6qe89sa5ko6y889b7nrf1deljpxvftkf1arv8idd9ri9dnys9ochqw6wsaj7j9jt8l9zvyet1xhz1bge4u5unnwz27a0vcqjq28tdypsl74njfp8gbu995urwg3dolpc54orbwvns2irppdgsyzs7lc7nhcw60v8xvotirdz5rpidzurerk9d2d5tq7840aastqyopleltfiwppotnyjjp30edegws0vsy8jwyrkwa44gzi9jv2fk4lnpseghr8apcys1w003hi80vxpeggiqnon50lsa4h1ajmvzd6cepek9vyckah2lw4k4ykmyzq1st0kdqtldp9nka83fn5zadz9kqc0dm155lf1tmhw8vvciuakw2lautqdxbhbrsy2m3mievrnjdc42w4gkxyim9yoz1lsv9xp21yj11qxxgltut71q3tse2r42rf == \2\4\l\6\9\n\7\p\x\m\c\c\o\n\b\c\r\r\i\d\m\x\d\o\u\i\o\f\i\7\r\b\0\t\p\i\r\4\x\b\k\z\5\o\q\s\t\3\4\o\m\m\6\t\4\p\s\z\w\n\n\2\i\7\b\x\v\k\g\6\q\e\8\9\s\a\5\k\o\6\y\8\8\9\b\7\n\r\f\1\d\e\l\j\p\x\v\f\t\k\f\1\a\r\v\8\i\d\d\9\r\i\9\d\n\y\s\9\o\c\h\q\w\6\w\s\a\j\7\j\9\j\t\8\l\9\z\v\y\e\t\1\x\h\z\1\b\g\e\4\u\5\u\n\n\w\z\2\7\a\0\v\c\q\j\q\2\8\t\d\y\p\s\l\7\4\n\j\f\p\8\g\b\u\9\9\5\u\r\w\g\3\d\o\l\p\c\5\4\o\r\b\w\v\n\s\2\i\r\p\p\d\g\s\y\z\s\7\l\c\7\n\h\c\w\6\0\v\8\x\v\o\t\i\r\d\z\5\r\p\i\d\z\u\r\e\r\k\9\d\2\d\5\t\q\7\8\4\0\a\a\s\t\q\y\o\p\l\e\l\t\f\i\w\p\p\o\t\n\y\j\j\p\3\0\e\d\e\g\w\s\0\v\s\y\8\j\w\y\r\k\w\a\4\4\g\z\i\9\j\v\2\f\k\4\l\n\p\s\e\g\h\r\8\a\p\c\y\s\1\w\0\0\3\h\i\8\0\v\x\p\e\g\g\i\q\n\o\n\5\0\l\s\a\4\h\1\a\j\m\v\z\d\6\c\e\p\e\k\9\v\y\c\k\a\h\2\l\w\4\k\4\y\k\m\y\z\q\1\s\t\0\k\d\q\t\l\d\p\9\n\k\a\8\3\f\n\5\z\a\d\z\9\k\q\c\0\d\m\1\5\5\l\f\1\t\m\h\w\8\v\v\c\i\u\a\k\w\2\l\a\u\t\q\d\x\b\h\b\r\s\y\2\m\3\m\i\e\v\r\n\j\d\c\4\2\w\4\g\k\x\y\i\m\9\y\o\z\1\l\s\v\9\x\p\2\1\y\j\1\1\q\x\x\g\l\t\u\t\7\1\q\3\t\s\e\2\r\4\2\r\f ]] 00:07:03.091 00:07:03.091 real 0m5.016s 00:07:03.091 user 0m2.910s 00:07:03.091 sys 0m2.313s 00:07:03.091 12:31:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.091 ************************************ 00:07:03.091 END TEST dd_flags_misc 00:07:03.091 ************************************ 00:07:03.091 12:31:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:03.349 12:31:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:03.349 12:31:29 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:07:03.349 12:31:29 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:07:03.349 * Second test run, disabling liburing, forcing AIO 00:07:03.349 12:31:29 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:07:03.349 12:31:29 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:07:03.349 12:31:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:03.349 12:31:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:07:03.349 12:31:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:03.349 ************************************ 00:07:03.349 START TEST dd_flag_append_forced_aio 00:07:03.349 ************************************ 00:07:03.349 12:31:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1123 -- # append 00:07:03.349 12:31:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:07:03.349 12:31:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:07:03.349 12:31:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:07:03.349 12:31:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:03.349 12:31:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:03.349 12:31:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=1vf4nt6dujte5l7o6sc6d0bm08p1uec5 00:07:03.349 12:31:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:07:03.349 12:31:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:03.349 12:31:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:03.349 12:31:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=edq5it57j7l8p0kqo8uvuzdtqhgx5ecj 00:07:03.349 12:31:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s 1vf4nt6dujte5l7o6sc6d0bm08p1uec5 00:07:03.349 12:31:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s edq5it57j7l8p0kqo8uvuzdtqhgx5ecj 00:07:03.349 12:31:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:03.349 [2024-07-12 12:31:29.274072] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
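From the "Second test run, disabling liburing, forcing AIO" banner onward, the same posix cases are repeated with --aio appended to every spdk_dd invocation, so the copies take the AIO code path instead of the io_uring path exercised in the first pass. In sketch form:

  DD_APP=(spdk_dd --aio)    # the harness adds the flag via DD_APP+=("--aio")
  "${DD_APP[@]}" --if=dd.dump0 --of=dd.dump1 --oflag=append
  [[ "$(< dd.dump1)" == "${dump1}${dump0}" ]]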
00:07:03.349 [2024-07-12 12:31:29.274213] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63510 ] 00:07:03.349 [2024-07-12 12:31:29.413121] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.607 [2024-07-12 12:31:29.533317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.607 [2024-07-12 12:31:29.588551] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:03.865  Copying: 32/32 [B] (average 31 kBps) 00:07:03.865 00:07:03.865 ************************************ 00:07:03.865 END TEST dd_flag_append_forced_aio 00:07:03.865 ************************************ 00:07:03.865 12:31:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ edq5it57j7l8p0kqo8uvuzdtqhgx5ecj1vf4nt6dujte5l7o6sc6d0bm08p1uec5 == \e\d\q\5\i\t\5\7\j\7\l\8\p\0\k\q\o\8\u\v\u\z\d\t\q\h\g\x\5\e\c\j\1\v\f\4\n\t\6\d\u\j\t\e\5\l\7\o\6\s\c\6\d\0\b\m\0\8\p\1\u\e\c\5 ]] 00:07:03.865 00:07:03.865 real 0m0.687s 00:07:03.865 user 0m0.412s 00:07:03.865 sys 0m0.154s 00:07:03.865 12:31:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.865 12:31:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:03.866 12:31:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:03.866 12:31:29 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:07:03.866 12:31:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:03.866 12:31:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.866 12:31:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:04.124 ************************************ 00:07:04.124 START TEST dd_flag_directory_forced_aio 00:07:04.124 ************************************ 00:07:04.124 12:31:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1123 -- # directory 00:07:04.124 12:31:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:04.124 12:31:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:04.124 12:31:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:04.124 12:31:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.124 12:31:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.124 12:31:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.124 12:31:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:07:04.124 12:31:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.124 12:31:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.124 12:31:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.124 12:31:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:04.124 12:31:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:04.124 [2024-07-12 12:31:29.996058] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:07:04.124 [2024-07-12 12:31:29.996152] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63542 ] 00:07:04.124 [2024-07-12 12:31:30.132751] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.383 [2024-07-12 12:31:30.255871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.383 [2024-07-12 12:31:30.313875] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:04.383 [2024-07-12 12:31:30.347308] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:04.383 [2024-07-12 12:31:30.347375] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:04.383 [2024-07-12 12:31:30.347391] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:04.642 [2024-07-12 12:31:30.467508] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:04.642 12:31:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:07:04.642 12:31:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:04.642 12:31:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:07:04.642 12:31:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:04.642 12:31:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:04.642 12:31:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:04.642 12:31:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:04.642 12:31:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:04.642 12:31:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:04.642 12:31:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.642 12:31:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.642 12:31:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.642 12:31:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.642 12:31:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.642 12:31:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.642 12:31:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:04.642 12:31:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:04.642 12:31:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:04.642 [2024-07-12 12:31:30.630595] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:07:04.642 [2024-07-12 12:31:30.630692] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63546 ] 00:07:04.900 [2024-07-12 12:31:30.769200] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.900 [2024-07-12 12:31:30.877944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.900 [2024-07-12 12:31:30.933324] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:04.900 [2024-07-12 12:31:30.968458] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:04.900 [2024-07-12 12:31:30.968507] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:04.900 [2024-07-12 12:31:30.968523] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:05.158 [2024-07-12 12:31:31.084296] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:05.158 12:31:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:07:05.158 12:31:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:05.158 12:31:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:07:05.158 12:31:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:05.158 12:31:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:05.158 
12:31:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:05.158 00:07:05.158 real 0m1.244s 00:07:05.158 user 0m0.718s 00:07:05.158 sys 0m0.316s 00:07:05.158 12:31:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.158 12:31:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:05.158 ************************************ 00:07:05.158 END TEST dd_flag_directory_forced_aio 00:07:05.158 ************************************ 00:07:05.158 12:31:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:05.158 12:31:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:07:05.158 12:31:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:05.158 12:31:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.158 12:31:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:05.416 ************************************ 00:07:05.416 START TEST dd_flag_nofollow_forced_aio 00:07:05.416 ************************************ 00:07:05.416 12:31:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1123 -- # nofollow 00:07:05.416 12:31:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:05.416 12:31:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:05.416 12:31:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:05.416 12:31:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:05.416 12:31:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:05.416 12:31:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:05.416 12:31:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:05.416 12:31:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.416 12:31:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.416 12:31:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.416 12:31:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.416 12:31:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.416 12:31:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.416 12:31:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.416 12:31:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:05.416 12:31:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:05.416 [2024-07-12 12:31:31.300291] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:07:05.416 [2024-07-12 12:31:31.300418] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63580 ] 00:07:05.416 [2024-07-12 12:31:31.438647] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.675 [2024-07-12 12:31:31.556164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.675 [2024-07-12 12:31:31.614603] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:05.675 [2024-07-12 12:31:31.651261] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:05.675 [2024-07-12 12:31:31.651340] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:05.675 [2024-07-12 12:31:31.651357] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:05.933 [2024-07-12 12:31:31.771527] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:05.933 12:31:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:07:05.933 12:31:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:05.933 12:31:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:07:05.933 12:31:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:05.933 12:31:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:05.933 12:31:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:05.933 12:31:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:05.933 12:31:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:07:05.933 12:31:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 
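The directory and nofollow checks traced above are negative tests: spdk_dd is expected to refuse the copy ("Not a directory" / "Too many levels of symbolic links"), and the NOT helper in common/autotest_common.sh remaps the non-zero exit status (es=236/216 in this run) back to success for the harness. A minimal standalone sketch of that pattern, with the binary and dump-file paths taken from the trace and the pass/fail handling simplified (the real remapping lives in autotest_common.sh), could look like:

DD_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

# --iflag=directory must fail: dd.dump0 is a regular file, not a directory
if "$DD_BIN" --aio --if="$DUMP0" --iflag=directory --of="$DUMP0"; then
    echo "directory flag unexpectedly accepted a regular file" >&2
    exit 1
fi

# --iflag=nofollow must fail when the input path is a symlink
ln -fs "$DUMP0" "$DUMP0.link"
if "$DD_BIN" --aio --if="$DUMP0.link" --iflag=nofollow --of="$DUMP1"; then
    echo "nofollow unexpectedly followed a symlink" >&2
    exit 1
fi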
00:07:05.933 12:31:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.933 12:31:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.933 12:31:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.933 12:31:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.933 12:31:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.933 12:31:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:05.933 12:31:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:05.933 12:31:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:05.934 12:31:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:05.934 [2024-07-12 12:31:31.941021] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:07:05.934 [2024-07-12 12:31:31.941157] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63590 ] 00:07:06.191 [2024-07-12 12:31:32.083829] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.191 [2024-07-12 12:31:32.215028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.449 [2024-07-12 12:31:32.272743] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:06.449 [2024-07-12 12:31:32.310194] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:06.449 [2024-07-12 12:31:32.310273] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:06.449 [2024-07-12 12:31:32.310290] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:06.449 [2024-07-12 12:31:32.428561] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:06.707 12:31:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:07:06.707 12:31:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:06.707 12:31:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:07:06.707 12:31:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:07:06.707 12:31:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:07:06.707 12:31:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:06.707 12:31:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:07:06.707 12:31:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:06.707 12:31:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:06.707 12:31:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:06.707 [2024-07-12 12:31:32.600390] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:07:06.707 [2024-07-12 12:31:32.600515] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63597 ] 00:07:06.707 [2024-07-12 12:31:32.740875] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.964 [2024-07-12 12:31:32.852091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.964 [2024-07-12 12:31:32.911412] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:07.222  Copying: 512/512 [B] (average 500 kBps) 00:07:07.222 00:07:07.222 12:31:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ vthk9f69tuwsw15f4s5sccuu64c1ie80psgg4qo8y9dopsk6re7a5omry4t8gxgiqnb7lkepqr64tjhpo23jkdzgw7v0e4q06zmanys1y3xihxp3zvo1s7zwf2jz91m7eu6ul8tendkkb15li1zt694r1eaj6cm361fxi2g63zpxcg0s4c3peea0c2z7f4qt93pqkcdxmy2xge44kqakegcd9iskd509br31boor7rxakacc718n8sktyvjwcswo4joeldnccjc17b8myapkgzdciy6lezl0w8qa2g0hmvhf838s8fkdbiumfa6fzx561s4u20i3hmrxhf1m9plxmqwlub8t7mv1xw6tz0qtjbq4dw8netoh1wgb8ygumf9qr77wmg4tm8apagb4jld416f6qvpm16i0ibnfcj8zi5dkz0jbw3xfg1w4qiq0kq2qgsljt7z4zlrk7m5cgrg327tacqhcpxvzf5fhlh96h52og00d02ygajmlgacpqleh == \v\t\h\k\9\f\6\9\t\u\w\s\w\1\5\f\4\s\5\s\c\c\u\u\6\4\c\1\i\e\8\0\p\s\g\g\4\q\o\8\y\9\d\o\p\s\k\6\r\e\7\a\5\o\m\r\y\4\t\8\g\x\g\i\q\n\b\7\l\k\e\p\q\r\6\4\t\j\h\p\o\2\3\j\k\d\z\g\w\7\v\0\e\4\q\0\6\z\m\a\n\y\s\1\y\3\x\i\h\x\p\3\z\v\o\1\s\7\z\w\f\2\j\z\9\1\m\7\e\u\6\u\l\8\t\e\n\d\k\k\b\1\5\l\i\1\z\t\6\9\4\r\1\e\a\j\6\c\m\3\6\1\f\x\i\2\g\6\3\z\p\x\c\g\0\s\4\c\3\p\e\e\a\0\c\2\z\7\f\4\q\t\9\3\p\q\k\c\d\x\m\y\2\x\g\e\4\4\k\q\a\k\e\g\c\d\9\i\s\k\d\5\0\9\b\r\3\1\b\o\o\r\7\r\x\a\k\a\c\c\7\1\8\n\8\s\k\t\y\v\j\w\c\s\w\o\4\j\o\e\l\d\n\c\c\j\c\1\7\b\8\m\y\a\p\k\g\z\d\c\i\y\6\l\e\z\l\0\w\8\q\a\2\g\0\h\m\v\h\f\8\3\8\s\8\f\k\d\b\i\u\m\f\a\6\f\z\x\5\6\1\s\4\u\2\0\i\3\h\m\r\x\h\f\1\m\9\p\l\x\m\q\w\l\u\b\8\t\7\m\v\1\x\w\6\t\z\0\q\t\j\b\q\4\d\w\8\n\e\t\o\h\1\w\g\b\8\y\g\u\m\f\9\q\r\7\7\w\m\g\4\t\m\8\a\p\a\g\b\4\j\l\d\4\1\6\f\6\q\v\p\m\1\6\i\0\i\b\n\f\c\j\8\z\i\5\d\k\z\0\j\b\w\3\x\f\g\1\w\4\q\i\q\0\k\q\2\q\g\s\l\j\t\7\z\4\z\l\r\k\7\m\5\c\g\r\g\3\2\7\t\a\c\q\h\c\p\x\v\z\f\5\f\h\l\h\9\6\h\5\2\o\g\0\0\d\0\2\y\g\a\j\m\l\g\a\c\p\q\l\e\h ]] 00:07:07.222 00:07:07.222 real 0m1.978s 00:07:07.222 user 0m1.150s 00:07:07.222 sys 0m0.482s 00:07:07.222 12:31:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.222 12:31:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:07.222 ************************************ 00:07:07.222 END TEST dd_flag_nofollow_forced_aio 
00:07:07.222 ************************************ 00:07:07.222 12:31:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:07.222 12:31:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:07:07.222 12:31:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:07.222 12:31:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.222 12:31:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:07.222 ************************************ 00:07:07.222 START TEST dd_flag_noatime_forced_aio 00:07:07.222 ************************************ 00:07:07.222 12:31:33 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1123 -- # noatime 00:07:07.222 12:31:33 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:07:07.222 12:31:33 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:07:07.222 12:31:33 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:07:07.222 12:31:33 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:07.222 12:31:33 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:07.222 12:31:33 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:07.222 12:31:33 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1720787492 00:07:07.222 12:31:33 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:07.222 12:31:33 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1720787493 00:07:07.222 12:31:33 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:07:08.606 12:31:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:08.606 [2024-07-12 12:31:34.336201] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
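The noatime test visible above records the access time of dd.dump0 with stat --printf=%X (1720787492 in this run), sleeps one second, then reads the file through spdk_dd with --iflag=noatime; the later (( atime_if == ... )) check asserts the read left the atime untouched, while a follow-up copy without the flag is expected to advance it. A condensed sketch of that sequence (same paths as in the trace, the assertion written as a plain arithmetic test) could look like:

DD_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

atime_before=$(stat --printf=%X "$DUMP0")
sleep 1                                     # make an atime change observable
"$DD_BIN" --aio --if="$DUMP0" --iflag=noatime --of="$DUMP1"
atime_after=$(stat --printf=%X "$DUMP0")
(( atime_before == atime_after ))           # a noatime read must not bump atime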
00:07:08.606 [2024-07-12 12:31:34.336313] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63643 ] 00:07:08.606 [2024-07-12 12:31:34.471992] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.606 [2024-07-12 12:31:34.586428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.606 [2024-07-12 12:31:34.645994] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:08.865  Copying: 512/512 [B] (average 500 kBps) 00:07:08.865 00:07:08.865 12:31:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:08.865 12:31:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1720787492 )) 00:07:08.865 12:31:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:09.122 12:31:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1720787493 )) 00:07:09.122 12:31:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:09.122 [2024-07-12 12:31:34.999449] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:07:09.122 [2024-07-12 12:31:34.999617] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63655 ] 00:07:09.122 [2024-07-12 12:31:35.140762] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.380 [2024-07-12 12:31:35.262897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.380 [2024-07-12 12:31:35.322309] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:09.639  Copying: 512/512 [B] (average 500 kBps) 00:07:09.639 00:07:09.639 12:31:35 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:09.639 12:31:35 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1720787495 )) 00:07:09.639 00:07:09.639 real 0m2.339s 00:07:09.639 user 0m0.769s 00:07:09.639 sys 0m0.329s 00:07:09.639 12:31:35 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.639 ************************************ 00:07:09.639 END TEST dd_flag_noatime_forced_aio 00:07:09.639 12:31:35 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:09.639 ************************************ 00:07:09.639 12:31:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:09.639 12:31:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:07:09.639 12:31:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:09.639 12:31:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.639 12:31:35 
spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:09.639 ************************************ 00:07:09.639 START TEST dd_flags_misc_forced_aio 00:07:09.639 ************************************ 00:07:09.639 12:31:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1123 -- # io 00:07:09.639 12:31:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:09.639 12:31:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:09.639 12:31:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:09.639 12:31:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:09.639 12:31:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:09.639 12:31:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:09.639 12:31:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:09.639 12:31:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:09.639 12:31:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:09.897 [2024-07-12 12:31:35.723445] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:07:09.897 [2024-07-12 12:31:35.723543] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63681 ] 00:07:09.897 [2024-07-12 12:31:35.861429] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.155 [2024-07-12 12:31:35.983428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.155 [2024-07-12 12:31:36.043038] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:10.413  Copying: 512/512 [B] (average 500 kBps) 00:07:10.413 00:07:10.413 12:31:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 0uveow823mw90auo3fq1wlcv68mdtlx8urrwh8toxu8k9jz4v3nv0oh8i23suggqbirqj2ilp16jlryh02a64n186vwa0bkle7m9dhxh1qfxh3mnqsv4nb4igq26l06v8c2be8ygdsuvr8rjspefvsotovyodfzru311ght1c6bfalf9vf9lfaidodeius5i7e0phlwg4uxlw21grtwo8ay9q8wdwlcw1w7byzsp0qtpo7ioxu9co9493n05z098lr4f9713x7hnimhnpf2elsd23zmix3pyts703i5w7jh97lhcl0sh4alw8boalsyy303ac7yjymeg1gjve1gpa2446ieham4qoogfzhmywf1vdnapuwsho2cnzap87dgnz8wzgp0y7dz1i35urs6n092od1bwa61ydxd04h5fvxo8brlu2rjzcypfot948m3badqt9u8our5lrqdk48dtss2sgg3y9xh36lm4ttz0y6rxjn20fvmcros3mhzxr376 == 
\0\u\v\e\o\w\8\2\3\m\w\9\0\a\u\o\3\f\q\1\w\l\c\v\6\8\m\d\t\l\x\8\u\r\r\w\h\8\t\o\x\u\8\k\9\j\z\4\v\3\n\v\0\o\h\8\i\2\3\s\u\g\g\q\b\i\r\q\j\2\i\l\p\1\6\j\l\r\y\h\0\2\a\6\4\n\1\8\6\v\w\a\0\b\k\l\e\7\m\9\d\h\x\h\1\q\f\x\h\3\m\n\q\s\v\4\n\b\4\i\g\q\2\6\l\0\6\v\8\c\2\b\e\8\y\g\d\s\u\v\r\8\r\j\s\p\e\f\v\s\o\t\o\v\y\o\d\f\z\r\u\3\1\1\g\h\t\1\c\6\b\f\a\l\f\9\v\f\9\l\f\a\i\d\o\d\e\i\u\s\5\i\7\e\0\p\h\l\w\g\4\u\x\l\w\2\1\g\r\t\w\o\8\a\y\9\q\8\w\d\w\l\c\w\1\w\7\b\y\z\s\p\0\q\t\p\o\7\i\o\x\u\9\c\o\9\4\9\3\n\0\5\z\0\9\8\l\r\4\f\9\7\1\3\x\7\h\n\i\m\h\n\p\f\2\e\l\s\d\2\3\z\m\i\x\3\p\y\t\s\7\0\3\i\5\w\7\j\h\9\7\l\h\c\l\0\s\h\4\a\l\w\8\b\o\a\l\s\y\y\3\0\3\a\c\7\y\j\y\m\e\g\1\g\j\v\e\1\g\p\a\2\4\4\6\i\e\h\a\m\4\q\o\o\g\f\z\h\m\y\w\f\1\v\d\n\a\p\u\w\s\h\o\2\c\n\z\a\p\8\7\d\g\n\z\8\w\z\g\p\0\y\7\d\z\1\i\3\5\u\r\s\6\n\0\9\2\o\d\1\b\w\a\6\1\y\d\x\d\0\4\h\5\f\v\x\o\8\b\r\l\u\2\r\j\z\c\y\p\f\o\t\9\4\8\m\3\b\a\d\q\t\9\u\8\o\u\r\5\l\r\q\d\k\4\8\d\t\s\s\2\s\g\g\3\y\9\x\h\3\6\l\m\4\t\t\z\0\y\6\r\x\j\n\2\0\f\v\m\c\r\o\s\3\m\h\z\x\r\3\7\6 ]] 00:07:10.413 12:31:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:10.413 12:31:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:10.413 [2024-07-12 12:31:36.387116] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:07:10.413 [2024-07-12 12:31:36.387226] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63694 ] 00:07:10.670 [2024-07-12 12:31:36.530035] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.670 [2024-07-12 12:31:36.649835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.670 [2024-07-12 12:31:36.709811] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:10.929  Copying: 512/512 [B] (average 500 kBps) 00:07:10.929 00:07:10.929 12:31:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 0uveow823mw90auo3fq1wlcv68mdtlx8urrwh8toxu8k9jz4v3nv0oh8i23suggqbirqj2ilp16jlryh02a64n186vwa0bkle7m9dhxh1qfxh3mnqsv4nb4igq26l06v8c2be8ygdsuvr8rjspefvsotovyodfzru311ght1c6bfalf9vf9lfaidodeius5i7e0phlwg4uxlw21grtwo8ay9q8wdwlcw1w7byzsp0qtpo7ioxu9co9493n05z098lr4f9713x7hnimhnpf2elsd23zmix3pyts703i5w7jh97lhcl0sh4alw8boalsyy303ac7yjymeg1gjve1gpa2446ieham4qoogfzhmywf1vdnapuwsho2cnzap87dgnz8wzgp0y7dz1i35urs6n092od1bwa61ydxd04h5fvxo8brlu2rjzcypfot948m3badqt9u8our5lrqdk48dtss2sgg3y9xh36lm4ttz0y6rxjn20fvmcros3mhzxr376 == 
\0\u\v\e\o\w\8\2\3\m\w\9\0\a\u\o\3\f\q\1\w\l\c\v\6\8\m\d\t\l\x\8\u\r\r\w\h\8\t\o\x\u\8\k\9\j\z\4\v\3\n\v\0\o\h\8\i\2\3\s\u\g\g\q\b\i\r\q\j\2\i\l\p\1\6\j\l\r\y\h\0\2\a\6\4\n\1\8\6\v\w\a\0\b\k\l\e\7\m\9\d\h\x\h\1\q\f\x\h\3\m\n\q\s\v\4\n\b\4\i\g\q\2\6\l\0\6\v\8\c\2\b\e\8\y\g\d\s\u\v\r\8\r\j\s\p\e\f\v\s\o\t\o\v\y\o\d\f\z\r\u\3\1\1\g\h\t\1\c\6\b\f\a\l\f\9\v\f\9\l\f\a\i\d\o\d\e\i\u\s\5\i\7\e\0\p\h\l\w\g\4\u\x\l\w\2\1\g\r\t\w\o\8\a\y\9\q\8\w\d\w\l\c\w\1\w\7\b\y\z\s\p\0\q\t\p\o\7\i\o\x\u\9\c\o\9\4\9\3\n\0\5\z\0\9\8\l\r\4\f\9\7\1\3\x\7\h\n\i\m\h\n\p\f\2\e\l\s\d\2\3\z\m\i\x\3\p\y\t\s\7\0\3\i\5\w\7\j\h\9\7\l\h\c\l\0\s\h\4\a\l\w\8\b\o\a\l\s\y\y\3\0\3\a\c\7\y\j\y\m\e\g\1\g\j\v\e\1\g\p\a\2\4\4\6\i\e\h\a\m\4\q\o\o\g\f\z\h\m\y\w\f\1\v\d\n\a\p\u\w\s\h\o\2\c\n\z\a\p\8\7\d\g\n\z\8\w\z\g\p\0\y\7\d\z\1\i\3\5\u\r\s\6\n\0\9\2\o\d\1\b\w\a\6\1\y\d\x\d\0\4\h\5\f\v\x\o\8\b\r\l\u\2\r\j\z\c\y\p\f\o\t\9\4\8\m\3\b\a\d\q\t\9\u\8\o\u\r\5\l\r\q\d\k\4\8\d\t\s\s\2\s\g\g\3\y\9\x\h\3\6\l\m\4\t\t\z\0\y\6\r\x\j\n\2\0\f\v\m\c\r\o\s\3\m\h\z\x\r\3\7\6 ]] 00:07:10.929 12:31:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:10.929 12:31:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:11.186 [2024-07-12 12:31:37.055391] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:07:11.186 [2024-07-12 12:31:37.055514] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63702 ] 00:07:11.186 [2024-07-12 12:31:37.192119] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.445 [2024-07-12 12:31:37.312964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.446 [2024-07-12 12:31:37.370268] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:11.706  Copying: 512/512 [B] (average 166 kBps) 00:07:11.706 00:07:11.706 12:31:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 0uveow823mw90auo3fq1wlcv68mdtlx8urrwh8toxu8k9jz4v3nv0oh8i23suggqbirqj2ilp16jlryh02a64n186vwa0bkle7m9dhxh1qfxh3mnqsv4nb4igq26l06v8c2be8ygdsuvr8rjspefvsotovyodfzru311ght1c6bfalf9vf9lfaidodeius5i7e0phlwg4uxlw21grtwo8ay9q8wdwlcw1w7byzsp0qtpo7ioxu9co9493n05z098lr4f9713x7hnimhnpf2elsd23zmix3pyts703i5w7jh97lhcl0sh4alw8boalsyy303ac7yjymeg1gjve1gpa2446ieham4qoogfzhmywf1vdnapuwsho2cnzap87dgnz8wzgp0y7dz1i35urs6n092od1bwa61ydxd04h5fvxo8brlu2rjzcypfot948m3badqt9u8our5lrqdk48dtss2sgg3y9xh36lm4ttz0y6rxjn20fvmcros3mhzxr376 == 
\0\u\v\e\o\w\8\2\3\m\w\9\0\a\u\o\3\f\q\1\w\l\c\v\6\8\m\d\t\l\x\8\u\r\r\w\h\8\t\o\x\u\8\k\9\j\z\4\v\3\n\v\0\o\h\8\i\2\3\s\u\g\g\q\b\i\r\q\j\2\i\l\p\1\6\j\l\r\y\h\0\2\a\6\4\n\1\8\6\v\w\a\0\b\k\l\e\7\m\9\d\h\x\h\1\q\f\x\h\3\m\n\q\s\v\4\n\b\4\i\g\q\2\6\l\0\6\v\8\c\2\b\e\8\y\g\d\s\u\v\r\8\r\j\s\p\e\f\v\s\o\t\o\v\y\o\d\f\z\r\u\3\1\1\g\h\t\1\c\6\b\f\a\l\f\9\v\f\9\l\f\a\i\d\o\d\e\i\u\s\5\i\7\e\0\p\h\l\w\g\4\u\x\l\w\2\1\g\r\t\w\o\8\a\y\9\q\8\w\d\w\l\c\w\1\w\7\b\y\z\s\p\0\q\t\p\o\7\i\o\x\u\9\c\o\9\4\9\3\n\0\5\z\0\9\8\l\r\4\f\9\7\1\3\x\7\h\n\i\m\h\n\p\f\2\e\l\s\d\2\3\z\m\i\x\3\p\y\t\s\7\0\3\i\5\w\7\j\h\9\7\l\h\c\l\0\s\h\4\a\l\w\8\b\o\a\l\s\y\y\3\0\3\a\c\7\y\j\y\m\e\g\1\g\j\v\e\1\g\p\a\2\4\4\6\i\e\h\a\m\4\q\o\o\g\f\z\h\m\y\w\f\1\v\d\n\a\p\u\w\s\h\o\2\c\n\z\a\p\8\7\d\g\n\z\8\w\z\g\p\0\y\7\d\z\1\i\3\5\u\r\s\6\n\0\9\2\o\d\1\b\w\a\6\1\y\d\x\d\0\4\h\5\f\v\x\o\8\b\r\l\u\2\r\j\z\c\y\p\f\o\t\9\4\8\m\3\b\a\d\q\t\9\u\8\o\u\r\5\l\r\q\d\k\4\8\d\t\s\s\2\s\g\g\3\y\9\x\h\3\6\l\m\4\t\t\z\0\y\6\r\x\j\n\2\0\f\v\m\c\r\o\s\3\m\h\z\x\r\3\7\6 ]] 00:07:11.706 12:31:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:11.706 12:31:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:11.706 [2024-07-12 12:31:37.711662] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:07:11.706 [2024-07-12 12:31:37.711764] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63709 ] 00:07:11.966 [2024-07-12 12:31:37.848935] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.966 [2024-07-12 12:31:37.969647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.966 [2024-07-12 12:31:38.024369] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:12.225  Copying: 512/512 [B] (average 500 kBps) 00:07:12.225 00:07:12.225 12:31:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 0uveow823mw90auo3fq1wlcv68mdtlx8urrwh8toxu8k9jz4v3nv0oh8i23suggqbirqj2ilp16jlryh02a64n186vwa0bkle7m9dhxh1qfxh3mnqsv4nb4igq26l06v8c2be8ygdsuvr8rjspefvsotovyodfzru311ght1c6bfalf9vf9lfaidodeius5i7e0phlwg4uxlw21grtwo8ay9q8wdwlcw1w7byzsp0qtpo7ioxu9co9493n05z098lr4f9713x7hnimhnpf2elsd23zmix3pyts703i5w7jh97lhcl0sh4alw8boalsyy303ac7yjymeg1gjve1gpa2446ieham4qoogfzhmywf1vdnapuwsho2cnzap87dgnz8wzgp0y7dz1i35urs6n092od1bwa61ydxd04h5fvxo8brlu2rjzcypfot948m3badqt9u8our5lrqdk48dtss2sgg3y9xh36lm4ttz0y6rxjn20fvmcros3mhzxr376 == 
\0\u\v\e\o\w\8\2\3\m\w\9\0\a\u\o\3\f\q\1\w\l\c\v\6\8\m\d\t\l\x\8\u\r\r\w\h\8\t\o\x\u\8\k\9\j\z\4\v\3\n\v\0\o\h\8\i\2\3\s\u\g\g\q\b\i\r\q\j\2\i\l\p\1\6\j\l\r\y\h\0\2\a\6\4\n\1\8\6\v\w\a\0\b\k\l\e\7\m\9\d\h\x\h\1\q\f\x\h\3\m\n\q\s\v\4\n\b\4\i\g\q\2\6\l\0\6\v\8\c\2\b\e\8\y\g\d\s\u\v\r\8\r\j\s\p\e\f\v\s\o\t\o\v\y\o\d\f\z\r\u\3\1\1\g\h\t\1\c\6\b\f\a\l\f\9\v\f\9\l\f\a\i\d\o\d\e\i\u\s\5\i\7\e\0\p\h\l\w\g\4\u\x\l\w\2\1\g\r\t\w\o\8\a\y\9\q\8\w\d\w\l\c\w\1\w\7\b\y\z\s\p\0\q\t\p\o\7\i\o\x\u\9\c\o\9\4\9\3\n\0\5\z\0\9\8\l\r\4\f\9\7\1\3\x\7\h\n\i\m\h\n\p\f\2\e\l\s\d\2\3\z\m\i\x\3\p\y\t\s\7\0\3\i\5\w\7\j\h\9\7\l\h\c\l\0\s\h\4\a\l\w\8\b\o\a\l\s\y\y\3\0\3\a\c\7\y\j\y\m\e\g\1\g\j\v\e\1\g\p\a\2\4\4\6\i\e\h\a\m\4\q\o\o\g\f\z\h\m\y\w\f\1\v\d\n\a\p\u\w\s\h\o\2\c\n\z\a\p\8\7\d\g\n\z\8\w\z\g\p\0\y\7\d\z\1\i\3\5\u\r\s\6\n\0\9\2\o\d\1\b\w\a\6\1\y\d\x\d\0\4\h\5\f\v\x\o\8\b\r\l\u\2\r\j\z\c\y\p\f\o\t\9\4\8\m\3\b\a\d\q\t\9\u\8\o\u\r\5\l\r\q\d\k\4\8\d\t\s\s\2\s\g\g\3\y\9\x\h\3\6\l\m\4\t\t\z\0\y\6\r\x\j\n\2\0\f\v\m\c\r\o\s\3\m\h\z\x\r\3\7\6 ]] 00:07:12.225 12:31:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:12.225 12:31:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:12.225 12:31:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:12.225 12:31:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:12.482 12:31:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:12.482 12:31:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:12.482 [2024-07-12 12:31:38.380289] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
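dd_flags_misc iterates a small flag matrix: flags_ro=(direct nonblock) on the read side, flags_rw adding sync and dsync on the write side, 512 fresh random bytes (gen_bytes 512) per read-side flag, one spdk_dd copy per combination, and a content check afterwards (the long [[ ... == ... ]] comparisons above). A rough equivalent of that loop, using cmp as a stand-in for the string comparison actually done in dd/posix.sh, is:

DD_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)
for flag_ro in "${flags_ro[@]}"; do
    head -c 512 /dev/urandom > "$DUMP0"     # stand-in for gen_bytes 512
    for flag_rw in "${flags_rw[@]}"; do
        "$DD_BIN" --aio --if="$DUMP0" --iflag="$flag_ro" \
                  --of="$DUMP1" --oflag="$flag_rw"
        cmp -s "$DUMP0" "$DUMP1"            # output must match the input bytes
    done
done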
00:07:12.482 [2024-07-12 12:31:38.380483] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63717 ] 00:07:12.482 [2024-07-12 12:31:38.526043] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.741 [2024-07-12 12:31:38.649185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.741 [2024-07-12 12:31:38.703628] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:12.999  Copying: 512/512 [B] (average 500 kBps) 00:07:12.999 00:07:12.999 12:31:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ul17ujch14kyt3kak0bqw7mlx56kbd3r6l5hb91ppoccbqzofa1oo8ur7s45j0ksze1bqzu6o2itgjd5c8gcmvzzwozjyzavg8e1ri2owdo6nn9tdte0cpfnqnue6ns0xcso0kxcg9c7ri0m52rdnb7th5lth7ukj67zml56zaclzoj2d5vwplaao6cz98ne20ejieexct76wshoazxjzwkk5oq3hehw03m5xh1z325ofsiboi4zv914674e6izoz77bkuvmqhfp4ahkzwmyin80j2mvzf6eq3i0uihox6nf02ljp4tben7904wens9rayi3k5slehaeu6q8tnsyum3596fx6l7hta6aty9uame9bampv52jdpkjuv3opjxp00p7t2ykq2f8y7snc0myr984j24uazq0vq1aipbrwarm0fo5d8nwx49s9017hxfryj9b6l3gaujy1zr6ekyel39q8yy8bo3odzpfrc3b3v1sbvgnx2wkud3lbt31cj3p == \u\l\1\7\u\j\c\h\1\4\k\y\t\3\k\a\k\0\b\q\w\7\m\l\x\5\6\k\b\d\3\r\6\l\5\h\b\9\1\p\p\o\c\c\b\q\z\o\f\a\1\o\o\8\u\r\7\s\4\5\j\0\k\s\z\e\1\b\q\z\u\6\o\2\i\t\g\j\d\5\c\8\g\c\m\v\z\z\w\o\z\j\y\z\a\v\g\8\e\1\r\i\2\o\w\d\o\6\n\n\9\t\d\t\e\0\c\p\f\n\q\n\u\e\6\n\s\0\x\c\s\o\0\k\x\c\g\9\c\7\r\i\0\m\5\2\r\d\n\b\7\t\h\5\l\t\h\7\u\k\j\6\7\z\m\l\5\6\z\a\c\l\z\o\j\2\d\5\v\w\p\l\a\a\o\6\c\z\9\8\n\e\2\0\e\j\i\e\e\x\c\t\7\6\w\s\h\o\a\z\x\j\z\w\k\k\5\o\q\3\h\e\h\w\0\3\m\5\x\h\1\z\3\2\5\o\f\s\i\b\o\i\4\z\v\9\1\4\6\7\4\e\6\i\z\o\z\7\7\b\k\u\v\m\q\h\f\p\4\a\h\k\z\w\m\y\i\n\8\0\j\2\m\v\z\f\6\e\q\3\i\0\u\i\h\o\x\6\n\f\0\2\l\j\p\4\t\b\e\n\7\9\0\4\w\e\n\s\9\r\a\y\i\3\k\5\s\l\e\h\a\e\u\6\q\8\t\n\s\y\u\m\3\5\9\6\f\x\6\l\7\h\t\a\6\a\t\y\9\u\a\m\e\9\b\a\m\p\v\5\2\j\d\p\k\j\u\v\3\o\p\j\x\p\0\0\p\7\t\2\y\k\q\2\f\8\y\7\s\n\c\0\m\y\r\9\8\4\j\2\4\u\a\z\q\0\v\q\1\a\i\p\b\r\w\a\r\m\0\f\o\5\d\8\n\w\x\4\9\s\9\0\1\7\h\x\f\r\y\j\9\b\6\l\3\g\a\u\j\y\1\z\r\6\e\k\y\e\l\3\9\q\8\y\y\8\b\o\3\o\d\z\p\f\r\c\3\b\3\v\1\s\b\v\g\n\x\2\w\k\u\d\3\l\b\t\3\1\c\j\3\p ]] 00:07:12.999 12:31:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:12.999 12:31:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:12.999 [2024-07-12 12:31:39.044001] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:07:12.999 [2024-07-12 12:31:39.044137] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63730 ] 00:07:13.339 [2024-07-12 12:31:39.186490] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.339 [2024-07-12 12:31:39.300360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.339 [2024-07-12 12:31:39.357976] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:13.598  Copying: 512/512 [B] (average 500 kBps) 00:07:13.598 00:07:13.598 12:31:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ul17ujch14kyt3kak0bqw7mlx56kbd3r6l5hb91ppoccbqzofa1oo8ur7s45j0ksze1bqzu6o2itgjd5c8gcmvzzwozjyzavg8e1ri2owdo6nn9tdte0cpfnqnue6ns0xcso0kxcg9c7ri0m52rdnb7th5lth7ukj67zml56zaclzoj2d5vwplaao6cz98ne20ejieexct76wshoazxjzwkk5oq3hehw03m5xh1z325ofsiboi4zv914674e6izoz77bkuvmqhfp4ahkzwmyin80j2mvzf6eq3i0uihox6nf02ljp4tben7904wens9rayi3k5slehaeu6q8tnsyum3596fx6l7hta6aty9uame9bampv52jdpkjuv3opjxp00p7t2ykq2f8y7snc0myr984j24uazq0vq1aipbrwarm0fo5d8nwx49s9017hxfryj9b6l3gaujy1zr6ekyel39q8yy8bo3odzpfrc3b3v1sbvgnx2wkud3lbt31cj3p == \u\l\1\7\u\j\c\h\1\4\k\y\t\3\k\a\k\0\b\q\w\7\m\l\x\5\6\k\b\d\3\r\6\l\5\h\b\9\1\p\p\o\c\c\b\q\z\o\f\a\1\o\o\8\u\r\7\s\4\5\j\0\k\s\z\e\1\b\q\z\u\6\o\2\i\t\g\j\d\5\c\8\g\c\m\v\z\z\w\o\z\j\y\z\a\v\g\8\e\1\r\i\2\o\w\d\o\6\n\n\9\t\d\t\e\0\c\p\f\n\q\n\u\e\6\n\s\0\x\c\s\o\0\k\x\c\g\9\c\7\r\i\0\m\5\2\r\d\n\b\7\t\h\5\l\t\h\7\u\k\j\6\7\z\m\l\5\6\z\a\c\l\z\o\j\2\d\5\v\w\p\l\a\a\o\6\c\z\9\8\n\e\2\0\e\j\i\e\e\x\c\t\7\6\w\s\h\o\a\z\x\j\z\w\k\k\5\o\q\3\h\e\h\w\0\3\m\5\x\h\1\z\3\2\5\o\f\s\i\b\o\i\4\z\v\9\1\4\6\7\4\e\6\i\z\o\z\7\7\b\k\u\v\m\q\h\f\p\4\a\h\k\z\w\m\y\i\n\8\0\j\2\m\v\z\f\6\e\q\3\i\0\u\i\h\o\x\6\n\f\0\2\l\j\p\4\t\b\e\n\7\9\0\4\w\e\n\s\9\r\a\y\i\3\k\5\s\l\e\h\a\e\u\6\q\8\t\n\s\y\u\m\3\5\9\6\f\x\6\l\7\h\t\a\6\a\t\y\9\u\a\m\e\9\b\a\m\p\v\5\2\j\d\p\k\j\u\v\3\o\p\j\x\p\0\0\p\7\t\2\y\k\q\2\f\8\y\7\s\n\c\0\m\y\r\9\8\4\j\2\4\u\a\z\q\0\v\q\1\a\i\p\b\r\w\a\r\m\0\f\o\5\d\8\n\w\x\4\9\s\9\0\1\7\h\x\f\r\y\j\9\b\6\l\3\g\a\u\j\y\1\z\r\6\e\k\y\e\l\3\9\q\8\y\y\8\b\o\3\o\d\z\p\f\r\c\3\b\3\v\1\s\b\v\g\n\x\2\w\k\u\d\3\l\b\t\3\1\c\j\3\p ]] 00:07:13.598 12:31:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:13.598 12:31:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:13.856 [2024-07-12 12:31:39.679325] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:07:13.856 [2024-07-12 12:31:39.679434] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63737 ] 00:07:13.856 [2024-07-12 12:31:39.814229] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.115 [2024-07-12 12:31:39.936725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.115 [2024-07-12 12:31:39.995215] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:14.374  Copying: 512/512 [B] (average 250 kBps) 00:07:14.374 00:07:14.374 12:31:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ul17ujch14kyt3kak0bqw7mlx56kbd3r6l5hb91ppoccbqzofa1oo8ur7s45j0ksze1bqzu6o2itgjd5c8gcmvzzwozjyzavg8e1ri2owdo6nn9tdte0cpfnqnue6ns0xcso0kxcg9c7ri0m52rdnb7th5lth7ukj67zml56zaclzoj2d5vwplaao6cz98ne20ejieexct76wshoazxjzwkk5oq3hehw03m5xh1z325ofsiboi4zv914674e6izoz77bkuvmqhfp4ahkzwmyin80j2mvzf6eq3i0uihox6nf02ljp4tben7904wens9rayi3k5slehaeu6q8tnsyum3596fx6l7hta6aty9uame9bampv52jdpkjuv3opjxp00p7t2ykq2f8y7snc0myr984j24uazq0vq1aipbrwarm0fo5d8nwx49s9017hxfryj9b6l3gaujy1zr6ekyel39q8yy8bo3odzpfrc3b3v1sbvgnx2wkud3lbt31cj3p == \u\l\1\7\u\j\c\h\1\4\k\y\t\3\k\a\k\0\b\q\w\7\m\l\x\5\6\k\b\d\3\r\6\l\5\h\b\9\1\p\p\o\c\c\b\q\z\o\f\a\1\o\o\8\u\r\7\s\4\5\j\0\k\s\z\e\1\b\q\z\u\6\o\2\i\t\g\j\d\5\c\8\g\c\m\v\z\z\w\o\z\j\y\z\a\v\g\8\e\1\r\i\2\o\w\d\o\6\n\n\9\t\d\t\e\0\c\p\f\n\q\n\u\e\6\n\s\0\x\c\s\o\0\k\x\c\g\9\c\7\r\i\0\m\5\2\r\d\n\b\7\t\h\5\l\t\h\7\u\k\j\6\7\z\m\l\5\6\z\a\c\l\z\o\j\2\d\5\v\w\p\l\a\a\o\6\c\z\9\8\n\e\2\0\e\j\i\e\e\x\c\t\7\6\w\s\h\o\a\z\x\j\z\w\k\k\5\o\q\3\h\e\h\w\0\3\m\5\x\h\1\z\3\2\5\o\f\s\i\b\o\i\4\z\v\9\1\4\6\7\4\e\6\i\z\o\z\7\7\b\k\u\v\m\q\h\f\p\4\a\h\k\z\w\m\y\i\n\8\0\j\2\m\v\z\f\6\e\q\3\i\0\u\i\h\o\x\6\n\f\0\2\l\j\p\4\t\b\e\n\7\9\0\4\w\e\n\s\9\r\a\y\i\3\k\5\s\l\e\h\a\e\u\6\q\8\t\n\s\y\u\m\3\5\9\6\f\x\6\l\7\h\t\a\6\a\t\y\9\u\a\m\e\9\b\a\m\p\v\5\2\j\d\p\k\j\u\v\3\o\p\j\x\p\0\0\p\7\t\2\y\k\q\2\f\8\y\7\s\n\c\0\m\y\r\9\8\4\j\2\4\u\a\z\q\0\v\q\1\a\i\p\b\r\w\a\r\m\0\f\o\5\d\8\n\w\x\4\9\s\9\0\1\7\h\x\f\r\y\j\9\b\6\l\3\g\a\u\j\y\1\z\r\6\e\k\y\e\l\3\9\q\8\y\y\8\b\o\3\o\d\z\p\f\r\c\3\b\3\v\1\s\b\v\g\n\x\2\w\k\u\d\3\l\b\t\3\1\c\j\3\p ]] 00:07:14.374 12:31:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:14.374 12:31:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:14.374 [2024-07-12 12:31:40.318277] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:07:14.374 [2024-07-12 12:31:40.318391] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63745 ] 00:07:14.632 [2024-07-12 12:31:40.451862] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.632 [2024-07-12 12:31:40.571787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.632 [2024-07-12 12:31:40.628132] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:14.891  Copying: 512/512 [B] (average 166 kBps) 00:07:14.891 00:07:14.891 12:31:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ul17ujch14kyt3kak0bqw7mlx56kbd3r6l5hb91ppoccbqzofa1oo8ur7s45j0ksze1bqzu6o2itgjd5c8gcmvzzwozjyzavg8e1ri2owdo6nn9tdte0cpfnqnue6ns0xcso0kxcg9c7ri0m52rdnb7th5lth7ukj67zml56zaclzoj2d5vwplaao6cz98ne20ejieexct76wshoazxjzwkk5oq3hehw03m5xh1z325ofsiboi4zv914674e6izoz77bkuvmqhfp4ahkzwmyin80j2mvzf6eq3i0uihox6nf02ljp4tben7904wens9rayi3k5slehaeu6q8tnsyum3596fx6l7hta6aty9uame9bampv52jdpkjuv3opjxp00p7t2ykq2f8y7snc0myr984j24uazq0vq1aipbrwarm0fo5d8nwx49s9017hxfryj9b6l3gaujy1zr6ekyel39q8yy8bo3odzpfrc3b3v1sbvgnx2wkud3lbt31cj3p == \u\l\1\7\u\j\c\h\1\4\k\y\t\3\k\a\k\0\b\q\w\7\m\l\x\5\6\k\b\d\3\r\6\l\5\h\b\9\1\p\p\o\c\c\b\q\z\o\f\a\1\o\o\8\u\r\7\s\4\5\j\0\k\s\z\e\1\b\q\z\u\6\o\2\i\t\g\j\d\5\c\8\g\c\m\v\z\z\w\o\z\j\y\z\a\v\g\8\e\1\r\i\2\o\w\d\o\6\n\n\9\t\d\t\e\0\c\p\f\n\q\n\u\e\6\n\s\0\x\c\s\o\0\k\x\c\g\9\c\7\r\i\0\m\5\2\r\d\n\b\7\t\h\5\l\t\h\7\u\k\j\6\7\z\m\l\5\6\z\a\c\l\z\o\j\2\d\5\v\w\p\l\a\a\o\6\c\z\9\8\n\e\2\0\e\j\i\e\e\x\c\t\7\6\w\s\h\o\a\z\x\j\z\w\k\k\5\o\q\3\h\e\h\w\0\3\m\5\x\h\1\z\3\2\5\o\f\s\i\b\o\i\4\z\v\9\1\4\6\7\4\e\6\i\z\o\z\7\7\b\k\u\v\m\q\h\f\p\4\a\h\k\z\w\m\y\i\n\8\0\j\2\m\v\z\f\6\e\q\3\i\0\u\i\h\o\x\6\n\f\0\2\l\j\p\4\t\b\e\n\7\9\0\4\w\e\n\s\9\r\a\y\i\3\k\5\s\l\e\h\a\e\u\6\q\8\t\n\s\y\u\m\3\5\9\6\f\x\6\l\7\h\t\a\6\a\t\y\9\u\a\m\e\9\b\a\m\p\v\5\2\j\d\p\k\j\u\v\3\o\p\j\x\p\0\0\p\7\t\2\y\k\q\2\f\8\y\7\s\n\c\0\m\y\r\9\8\4\j\2\4\u\a\z\q\0\v\q\1\a\i\p\b\r\w\a\r\m\0\f\o\5\d\8\n\w\x\4\9\s\9\0\1\7\h\x\f\r\y\j\9\b\6\l\3\g\a\u\j\y\1\z\r\6\e\k\y\e\l\3\9\q\8\y\y\8\b\o\3\o\d\z\p\f\r\c\3\b\3\v\1\s\b\v\g\n\x\2\w\k\u\d\3\l\b\t\3\1\c\j\3\p ]] 00:07:14.891 00:07:14.891 real 0m5.292s 00:07:14.891 user 0m3.077s 00:07:14.891 sys 0m1.238s 00:07:14.891 12:31:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.891 12:31:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:14.891 ************************************ 00:07:14.891 END TEST dd_flags_misc_forced_aio 00:07:14.891 ************************************ 00:07:15.149 12:31:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:07:15.149 12:31:40 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:07:15.149 12:31:40 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:15.149 12:31:40 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:15.149 00:07:15.149 real 0m23.281s 00:07:15.149 user 0m12.261s 00:07:15.149 sys 0m6.933s 00:07:15.149 12:31:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.149 12:31:40 
spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:15.149 ************************************ 00:07:15.149 END TEST spdk_dd_posix 00:07:15.149 ************************************ 00:07:15.149 12:31:41 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:15.149 12:31:41 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:15.149 12:31:41 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:15.149 12:31:41 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.149 12:31:41 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:15.149 ************************************ 00:07:15.149 START TEST spdk_dd_malloc 00:07:15.149 ************************************ 00:07:15.149 12:31:41 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:15.149 * Looking for test storage... 00:07:15.149 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:15.149 12:31:41 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:15.149 12:31:41 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:15.149 12:31:41 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:15.149 12:31:41 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:15.149 12:31:41 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.149 12:31:41 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.149 12:31:41 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.149 12:31:41 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:07:15.149 12:31:41 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.149 12:31:41 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:07:15.149 12:31:41 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:15.149 12:31:41 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.149 12:31:41 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:15.149 ************************************ 00:07:15.149 START TEST dd_malloc_copy 00:07:15.149 ************************************ 00:07:15.149 12:31:41 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1123 -- # malloc_copy 00:07:15.149 12:31:41 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:07:15.149 12:31:41 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:07:15.149 12:31:41 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:15.149 12:31:41 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:07:15.149 12:31:41 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:07:15.149 12:31:41 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:07:15.149 12:31:41 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:07:15.149 12:31:41 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:07:15.149 12:31:41 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:15.149 12:31:41 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:15.149 [2024-07-12 12:31:41.193706] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:07:15.149 [2024-07-12 12:31:41.193802] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63819 ] 00:07:15.149 { 00:07:15.149 "subsystems": [ 00:07:15.149 { 00:07:15.149 "subsystem": "bdev", 00:07:15.149 "config": [ 00:07:15.149 { 00:07:15.149 "params": { 00:07:15.149 "block_size": 512, 00:07:15.149 "num_blocks": 1048576, 00:07:15.149 "name": "malloc0" 00:07:15.149 }, 00:07:15.149 "method": "bdev_malloc_create" 00:07:15.149 }, 00:07:15.149 { 00:07:15.149 "params": { 00:07:15.149 "block_size": 512, 00:07:15.149 "num_blocks": 1048576, 00:07:15.149 "name": "malloc1" 00:07:15.149 }, 00:07:15.149 "method": "bdev_malloc_create" 00:07:15.149 }, 00:07:15.149 { 00:07:15.149 "method": "bdev_wait_for_examine" 00:07:15.149 } 00:07:15.149 ] 00:07:15.149 } 00:07:15.149 ] 00:07:15.149 } 00:07:15.408 [2024-07-12 12:31:41.330535] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.408 [2024-07-12 12:31:41.454660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.666 [2024-07-12 12:31:41.517105] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:19.479  Copying: 195/512 [MB] (195 MBps) Copying: 391/512 [MB] (196 MBps) Copying: 512/512 [MB] (average 197 MBps) 00:07:19.479 00:07:19.479 12:31:45 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:07:19.479 12:31:45 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:07:19.479 12:31:45 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:19.479 12:31:45 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:19.479 [2024-07-12 12:31:45.250135] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
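dd_malloc_copy drives spdk_dd entirely against in-memory bdevs: the JSON dumped above creates two malloc bdevs of 1048576 blocks x 512 bytes (512 MiB each), and the data is copied malloc0 -> malloc1 and then back using --ib/--ob instead of file paths. Reproducing that by hand, feeding the same config over a process substitution much like the harness's --json /dev/fd/62, might look like:

DD_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
CONF='{"subsystems":[{"subsystem":"bdev","config":[
  {"method":"bdev_malloc_create","params":{"name":"malloc0","num_blocks":1048576,"block_size":512}},
  {"method":"bdev_malloc_create","params":{"name":"malloc1","num_blocks":1048576,"block_size":512}},
  {"method":"bdev_wait_for_examine"}]}]}'

"$DD_BIN" --ib=malloc0 --ob=malloc1 --json <(printf '%s' "$CONF")   # forward copy
"$DD_BIN" --ib=malloc1 --ob=malloc0 --json <(printf '%s' "$CONF")   # reverse copy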
00:07:19.479 [2024-07-12 12:31:45.250236] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63872 ] 00:07:19.479 { 00:07:19.479 "subsystems": [ 00:07:19.479 { 00:07:19.479 "subsystem": "bdev", 00:07:19.479 "config": [ 00:07:19.479 { 00:07:19.479 "params": { 00:07:19.479 "block_size": 512, 00:07:19.479 "num_blocks": 1048576, 00:07:19.479 "name": "malloc0" 00:07:19.479 }, 00:07:19.479 "method": "bdev_malloc_create" 00:07:19.479 }, 00:07:19.479 { 00:07:19.479 "params": { 00:07:19.479 "block_size": 512, 00:07:19.479 "num_blocks": 1048576, 00:07:19.479 "name": "malloc1" 00:07:19.479 }, 00:07:19.479 "method": "bdev_malloc_create" 00:07:19.479 }, 00:07:19.479 { 00:07:19.479 "method": "bdev_wait_for_examine" 00:07:19.479 } 00:07:19.479 ] 00:07:19.479 } 00:07:19.479 ] 00:07:19.479 } 00:07:19.479 [2024-07-12 12:31:45.391378] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.479 [2024-07-12 12:31:45.503540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.738 [2024-07-12 12:31:45.565482] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:23.179  Copying: 201/512 [MB] (201 MBps) Copying: 407/512 [MB] (205 MBps) Copying: 512/512 [MB] (average 202 MBps) 00:07:23.179 00:07:23.179 00:07:23.179 real 0m8.009s 00:07:23.179 user 0m6.888s 00:07:23.179 sys 0m0.963s 00:07:23.179 12:31:49 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.179 12:31:49 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:23.179 ************************************ 00:07:23.179 END TEST dd_malloc_copy 00:07:23.179 ************************************ 00:07:23.179 12:31:49 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1142 -- # return 0 00:07:23.179 00:07:23.179 real 0m8.143s 00:07:23.179 user 0m6.934s 00:07:23.179 sys 0m1.050s 00:07:23.179 12:31:49 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.179 12:31:49 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:23.179 ************************************ 00:07:23.179 END TEST spdk_dd_malloc 00:07:23.179 ************************************ 00:07:23.179 12:31:49 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:23.179 12:31:49 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:23.179 12:31:49 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:23.179 12:31:49 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.179 12:31:49 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:23.179 ************************************ 00:07:23.179 START TEST spdk_dd_bdev_to_bdev 00:07:23.179 ************************************ 00:07:23.179 12:31:49 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:07:23.438 * Looking for test storage... 
00:07:23.438 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:23.438 12:31:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:23.438 12:31:49 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:23.438 12:31:49 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:23.438 12:31:49 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:23.438 12:31:49 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.438 12:31:49 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.438 12:31:49 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.438 12:31:49 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:07:23.438 12:31:49 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.438 12:31:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:07:23.438 12:31:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:07:23.438 12:31:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:07:23.438 12:31:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:07:23.438 12:31:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:07:23.438 
12:31:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:07:23.438 12:31:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:07:23.438 12:31:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:07:23.438 12:31:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:07:23.438 12:31:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:07:23.438 12:31:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:23.438 12:31:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:23.438 12:31:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:07:23.438 12:31:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:07:23.438 12:31:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:23.438 12:31:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:23.438 12:31:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:07:23.438 12:31:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:07:23.438 12:31:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:23.438 12:31:49 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:23.438 12:31:49 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.438 12:31:49 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:23.438 ************************************ 00:07:23.438 START TEST dd_inflate_file 00:07:23.438 ************************************ 00:07:23.438 12:31:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:07:23.438 [2024-07-12 12:31:49.407170] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:07:23.438 [2024-07-12 12:31:49.407331] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63982 ] 00:07:23.696 [2024-07-12 12:31:49.546246] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.697 [2024-07-12 12:31:49.669777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.697 [2024-07-12 12:31:49.726116] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:23.955  Copying: 64/64 [MB] (average 1560 MBps) 00:07:23.955 00:07:24.213 00:07:24.213 real 0m0.694s 00:07:24.213 user 0m0.424s 00:07:24.213 sys 0m0.326s 00:07:24.213 12:31:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.213 ************************************ 00:07:24.213 END TEST dd_inflate_file 00:07:24.213 ************************************ 00:07:24.213 12:31:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:07:24.213 12:31:50 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:07:24.213 12:31:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:07:24.213 12:31:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:07:24.213 12:31:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:24.213 12:31:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:07:24.213 12:31:50 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:24.213 12:31:50 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:24.213 12:31:50 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.213 12:31:50 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:24.213 12:31:50 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:24.213 ************************************ 00:07:24.213 START TEST dd_copy_to_out_bdev 00:07:24.213 ************************************ 00:07:24.213 12:31:50 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:07:24.213 [2024-07-12 12:31:50.130625] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:07:24.213 [2024-07-12 12:31:50.130724] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64020 ] 00:07:24.213 { 00:07:24.213 "subsystems": [ 00:07:24.213 { 00:07:24.213 "subsystem": "bdev", 00:07:24.213 "config": [ 00:07:24.213 { 00:07:24.213 "params": { 00:07:24.213 "trtype": "pcie", 00:07:24.213 "traddr": "0000:00:10.0", 00:07:24.213 "name": "Nvme0" 00:07:24.213 }, 00:07:24.213 "method": "bdev_nvme_attach_controller" 00:07:24.213 }, 00:07:24.213 { 00:07:24.213 "params": { 00:07:24.213 "trtype": "pcie", 00:07:24.213 "traddr": "0000:00:11.0", 00:07:24.213 "name": "Nvme1" 00:07:24.213 }, 00:07:24.213 "method": "bdev_nvme_attach_controller" 00:07:24.213 }, 00:07:24.213 { 00:07:24.213 "method": "bdev_wait_for_examine" 00:07:24.213 } 00:07:24.213 ] 00:07:24.213 } 00:07:24.213 ] 00:07:24.213 } 00:07:24.213 [2024-07-12 12:31:50.269744] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.471 [2024-07-12 12:31:50.388333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.471 [2024-07-12 12:31:50.445931] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:26.103  Copying: 56/64 [MB] (56 MBps) Copying: 64/64 [MB] (average 56 MBps) 00:07:26.103 00:07:26.103 00:07:26.103 real 0m1.947s 00:07:26.103 user 0m1.703s 00:07:26.103 sys 0m1.505s 00:07:26.103 12:31:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.103 ************************************ 00:07:26.103 END TEST dd_copy_to_out_bdev 00:07:26.103 12:31:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:26.103 ************************************ 00:07:26.103 12:31:52 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:07:26.103 12:31:52 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:07:26.103 12:31:52 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:07:26.103 12:31:52 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:26.103 12:31:52 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.103 12:31:52 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:26.103 ************************************ 00:07:26.103 START TEST dd_offset_magic 00:07:26.103 ************************************ 00:07:26.103 12:31:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1123 -- # offset_magic 00:07:26.103 12:31:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:07:26.103 12:31:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:07:26.103 12:31:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:07:26.103 12:31:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:26.103 12:31:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:07:26.103 12:31:52 
spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:26.103 12:31:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:26.103 12:31:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:26.104 [2024-07-12 12:31:52.122346] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:07:26.104 [2024-07-12 12:31:52.122502] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64061 ] 00:07:26.104 { 00:07:26.104 "subsystems": [ 00:07:26.104 { 00:07:26.104 "subsystem": "bdev", 00:07:26.104 "config": [ 00:07:26.104 { 00:07:26.104 "params": { 00:07:26.104 "trtype": "pcie", 00:07:26.104 "traddr": "0000:00:10.0", 00:07:26.104 "name": "Nvme0" 00:07:26.104 }, 00:07:26.104 "method": "bdev_nvme_attach_controller" 00:07:26.104 }, 00:07:26.104 { 00:07:26.104 "params": { 00:07:26.104 "trtype": "pcie", 00:07:26.104 "traddr": "0000:00:11.0", 00:07:26.104 "name": "Nvme1" 00:07:26.104 }, 00:07:26.104 "method": "bdev_nvme_attach_controller" 00:07:26.104 }, 00:07:26.104 { 00:07:26.104 "method": "bdev_wait_for_examine" 00:07:26.104 } 00:07:26.104 ] 00:07:26.104 } 00:07:26.104 ] 00:07:26.104 } 00:07:26.385 [2024-07-12 12:31:52.261626] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.385 [2024-07-12 12:31:52.388682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.385 [2024-07-12 12:31:52.451029] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:26.902  Copying: 65/65 [MB] (average 928 MBps) 00:07:26.902 00:07:27.159 12:31:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:07:27.159 12:31:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:27.159 12:31:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:27.159 12:31:52 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:27.159 [2024-07-12 12:31:53.038013] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:07:27.159 [2024-07-12 12:31:53.038152] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64075 ] 00:07:27.159 { 00:07:27.159 "subsystems": [ 00:07:27.159 { 00:07:27.159 "subsystem": "bdev", 00:07:27.159 "config": [ 00:07:27.159 { 00:07:27.159 "params": { 00:07:27.159 "trtype": "pcie", 00:07:27.159 "traddr": "0000:00:10.0", 00:07:27.159 "name": "Nvme0" 00:07:27.159 }, 00:07:27.159 "method": "bdev_nvme_attach_controller" 00:07:27.159 }, 00:07:27.159 { 00:07:27.159 "params": { 00:07:27.159 "trtype": "pcie", 00:07:27.159 "traddr": "0000:00:11.0", 00:07:27.159 "name": "Nvme1" 00:07:27.159 }, 00:07:27.159 "method": "bdev_nvme_attach_controller" 00:07:27.159 }, 00:07:27.159 { 00:07:27.159 "method": "bdev_wait_for_examine" 00:07:27.159 } 00:07:27.159 ] 00:07:27.159 } 00:07:27.159 ] 00:07:27.159 } 00:07:27.159 [2024-07-12 12:31:53.182515] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.416 [2024-07-12 12:31:53.302588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.416 [2024-07-12 12:31:53.359681] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:27.933  Copying: 1024/1024 [kB] (average 500 MBps) 00:07:27.933 00:07:27.933 12:31:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:27.933 12:31:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:27.933 12:31:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:07:27.933 12:31:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:07:27.933 12:31:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:07:27.933 12:31:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:27.933 12:31:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:27.933 [2024-07-12 12:31:53.826277] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:07:27.933 [2024-07-12 12:31:53.826391] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64097 ] 00:07:27.933 { 00:07:27.933 "subsystems": [ 00:07:27.933 { 00:07:27.933 "subsystem": "bdev", 00:07:27.933 "config": [ 00:07:27.933 { 00:07:27.933 "params": { 00:07:27.933 "trtype": "pcie", 00:07:27.933 "traddr": "0000:00:10.0", 00:07:27.933 "name": "Nvme0" 00:07:27.933 }, 00:07:27.933 "method": "bdev_nvme_attach_controller" 00:07:27.933 }, 00:07:27.933 { 00:07:27.933 "params": { 00:07:27.933 "trtype": "pcie", 00:07:27.933 "traddr": "0000:00:11.0", 00:07:27.933 "name": "Nvme1" 00:07:27.933 }, 00:07:27.933 "method": "bdev_nvme_attach_controller" 00:07:27.933 }, 00:07:27.933 { 00:07:27.933 "method": "bdev_wait_for_examine" 00:07:27.933 } 00:07:27.933 ] 00:07:27.933 } 00:07:27.933 ] 00:07:27.933 } 00:07:27.933 [2024-07-12 12:31:53.967278] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.191 [2024-07-12 12:31:54.100183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.191 [2024-07-12 12:31:54.158817] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:28.708  Copying: 65/65 [MB] (average 1015 MBps) 00:07:28.708 00:07:28.708 12:31:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:07:28.708 12:31:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:07:28.708 12:31:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:07:28.708 12:31:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:28.708 [2024-07-12 12:31:54.742990] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:07:28.708 [2024-07-12 12:31:54.743121] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64117 ] 00:07:28.708 { 00:07:28.708 "subsystems": [ 00:07:28.708 { 00:07:28.708 "subsystem": "bdev", 00:07:28.708 "config": [ 00:07:28.708 { 00:07:28.708 "params": { 00:07:28.708 "trtype": "pcie", 00:07:28.708 "traddr": "0000:00:10.0", 00:07:28.708 "name": "Nvme0" 00:07:28.708 }, 00:07:28.708 "method": "bdev_nvme_attach_controller" 00:07:28.708 }, 00:07:28.708 { 00:07:28.708 "params": { 00:07:28.708 "trtype": "pcie", 00:07:28.708 "traddr": "0000:00:11.0", 00:07:28.708 "name": "Nvme1" 00:07:28.708 }, 00:07:28.708 "method": "bdev_nvme_attach_controller" 00:07:28.708 }, 00:07:28.708 { 00:07:28.708 "method": "bdev_wait_for_examine" 00:07:28.708 } 00:07:28.708 ] 00:07:28.708 } 00:07:28.708 ] 00:07:28.708 } 00:07:28.967 [2024-07-12 12:31:54.882397] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.967 [2024-07-12 12:31:55.002467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.226 [2024-07-12 12:31:55.057932] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:29.483  Copying: 1024/1024 [kB] (average 500 MBps) 00:07:29.483 00:07:29.483 12:31:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:07:29.483 12:31:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:07:29.483 00:07:29.483 real 0m3.401s 00:07:29.483 user 0m2.521s 00:07:29.483 sys 0m0.967s 00:07:29.483 12:31:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.483 12:31:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:07:29.483 ************************************ 00:07:29.483 END TEST dd_offset_magic 00:07:29.483 ************************************ 00:07:29.483 12:31:55 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:07:29.483 12:31:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:07:29.483 12:31:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:07:29.483 12:31:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:29.484 12:31:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:29.484 12:31:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:29.484 12:31:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:29.484 12:31:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:29.484 12:31:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:07:29.484 12:31:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:29.484 12:31:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:29.484 12:31:55 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:29.741 [2024-07-12 12:31:55.566140] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:07:29.741 [2024-07-12 12:31:55.566255] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64154 ] 00:07:29.741 { 00:07:29.741 "subsystems": [ 00:07:29.741 { 00:07:29.741 "subsystem": "bdev", 00:07:29.741 "config": [ 00:07:29.741 { 00:07:29.741 "params": { 00:07:29.741 "trtype": "pcie", 00:07:29.741 "traddr": "0000:00:10.0", 00:07:29.741 "name": "Nvme0" 00:07:29.741 }, 00:07:29.741 "method": "bdev_nvme_attach_controller" 00:07:29.741 }, 00:07:29.741 { 00:07:29.741 "params": { 00:07:29.741 "trtype": "pcie", 00:07:29.741 "traddr": "0000:00:11.0", 00:07:29.741 "name": "Nvme1" 00:07:29.741 }, 00:07:29.741 "method": "bdev_nvme_attach_controller" 00:07:29.741 }, 00:07:29.741 { 00:07:29.741 "method": "bdev_wait_for_examine" 00:07:29.741 } 00:07:29.741 ] 00:07:29.741 } 00:07:29.741 ] 00:07:29.741 } 00:07:29.741 [2024-07-12 12:31:55.698195] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.998 [2024-07-12 12:31:55.820145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.998 [2024-07-12 12:31:55.877491] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:30.300  Copying: 5120/5120 [kB] (average 1000 MBps) 00:07:30.300 00:07:30.300 12:31:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:07:30.300 12:31:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:07:30.300 12:31:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:07:30.300 12:31:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:07:30.300 12:31:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:07:30.300 12:31:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:07:30.300 12:31:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:07:30.300 12:31:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:07:30.300 12:31:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:30.300 12:31:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:30.300 { 00:07:30.300 "subsystems": [ 00:07:30.300 { 00:07:30.300 "subsystem": "bdev", 00:07:30.300 "config": [ 00:07:30.300 { 00:07:30.300 "params": { 00:07:30.300 "trtype": "pcie", 00:07:30.300 "traddr": "0000:00:10.0", 00:07:30.300 "name": "Nvme0" 00:07:30.300 }, 00:07:30.300 "method": "bdev_nvme_attach_controller" 00:07:30.300 }, 00:07:30.300 { 00:07:30.300 "params": { 00:07:30.300 "trtype": "pcie", 00:07:30.300 "traddr": "0000:00:11.0", 00:07:30.300 "name": "Nvme1" 00:07:30.300 }, 00:07:30.300 "method": "bdev_nvme_attach_controller" 00:07:30.300 }, 00:07:30.300 { 00:07:30.300 "method": "bdev_wait_for_examine" 00:07:30.300 } 00:07:30.300 ] 00:07:30.300 } 00:07:30.300 ] 00:07:30.300 } 00:07:30.300 [2024-07-12 12:31:56.366863] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:07:30.300 [2024-07-12 12:31:56.367024] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64170 ] 00:07:30.558 [2024-07-12 12:31:56.513495] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.816 [2024-07-12 12:31:56.632862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.816 [2024-07-12 12:31:56.687325] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:31.074  Copying: 5120/5120 [kB] (average 833 MBps) 00:07:31.074 00:07:31.074 12:31:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:07:31.074 00:07:31.074 real 0m7.885s 00:07:31.074 user 0m5.882s 00:07:31.074 sys 0m3.522s 00:07:31.074 12:31:57 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:31.074 12:31:57 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:31.074 ************************************ 00:07:31.074 END TEST spdk_dd_bdev_to_bdev 00:07:31.074 ************************************ 00:07:31.332 12:31:57 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:31.332 12:31:57 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:07:31.332 12:31:57 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:31.332 12:31:57 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:31.332 12:31:57 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.332 12:31:57 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:31.332 ************************************ 00:07:31.332 START TEST spdk_dd_uring 00:07:31.332 ************************************ 00:07:31.332 12:31:57 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:07:31.332 * Looking for test storage... 
00:07:31.332 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:31.332 12:31:57 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:31.332 12:31:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:31.332 12:31:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:31.332 12:31:57 spdk_dd.spdk_dd_uring -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:31.332 12:31:57 spdk_dd.spdk_dd_uring -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.332 12:31:57 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.332 12:31:57 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.332 12:31:57 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:07:31.332 12:31:57 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.332 12:31:57 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:07:31.332 12:31:57 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:31.332 12:31:57 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.332 12:31:57 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:31.332 ************************************ 00:07:31.332 START TEST dd_uring_copy 00:07:31.332 ************************************ 00:07:31.332 
12:31:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1123 -- # uring_zram_copy 00:07:31.332 12:31:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:07:31.333 12:31:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:07:31.333 12:31:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:07:31.333 12:31:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:31.333 12:31:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:07:31.333 12:31:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:07:31.333 12:31:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:07:31.333 12:31:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # return 00:07:31.333 12:31:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:07:31.333 12:31:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:07:31.333 12:31:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:07:31.333 12:31:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:07:31.333 12:31:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@181 -- # local id=1 00:07:31.333 12:31:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # local size=512M 00:07:31.333 12:31:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:07:31.333 12:31:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@186 -- # echo 512M 00:07:31.333 12:31:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:07:31.333 12:31:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:07:31.333 12:31:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:07:31.333 12:31:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:07:31.333 12:31:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:07:31.333 12:31:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:07:31.333 12:31:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:07:31.333 12:31:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:07:31.333 12:31:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:31.333 12:31:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # 
magic=04uzrmt7szjkk1ytiuban7l6sxthr1xc95ejpcifkvojyrqkwbt04jsk1yykybykjx9tcop7a1jkzjpati9o90i3rs4kwtuwz6z2um4b0764jngso22spt2lhsq4od7ucxyiciemvjt89ia97q0p7og1kb12rofkrtv34gt44nwjaksd6iwuxlj07o0tmdkyo3g32k1s5j9536gdqmbjbqxcysia4uo3kestnryuth45mbxhgxqyquimo17hrvb8vyz28l24non3mwvcp9159hphzqja36krv3qfi7wmgubq5n54vp09iy9r4gwl23fj8w7f3hsnqhogjoky2q2n8ls8b61njt72cq8egu9m8yifzgrlnp0tjbv60glqbbcki9ln7iu0p1ay4yqslkvlbq98tpzch35anti71relsctg2dvey3c3lenlol3w21u9uqp5t1c7xwdb04l7ham83mf2avmt52hpbzxb1t91pid3qppn0srjxrh73k4pvg1y84acys7ulpvjq7abce5f3ab1qxxg9a5s4egcpyor47wz40gyuuioq9apga28ygjrnziqeombgnwmczip2ywhx423p65lpl3gd6oc25sps46bciniv40uhtj6s2qj2edy7cm3m6rmygyihrz83thu67otbgn1r71xmtr31gumqoenjidt5b3kdz2mt0kxt667wzseynafi5h1wdgi9y22b6ku2e9iuajk9tclr7r4q1hnhnzqsq77qww5btavsv983mqppn9w25k3njc6n1p0uccm334aszdxsru0dnrvndbvfaybjhvn1dsbpr4o5umafkdelzxju6yafoyu5jienptmzc3wblzu0rmcg43j8vtgcd1muxgqml35kjhg901ldh54f3d80jd9uf29dv16auy9dgs7m2u195tv9svoufl0wnw23nc72sk0u2yx4ozypl9iyg7foc5cz09vwjfwnh4r3o9bzici5mi9yhdkoh2o61hs9cz3dvaqwy6aedca 00:07:31.333 12:31:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 04uzrmt7szjkk1ytiuban7l6sxthr1xc95ejpcifkvojyrqkwbt04jsk1yykybykjx9tcop7a1jkzjpati9o90i3rs4kwtuwz6z2um4b0764jngso22spt2lhsq4od7ucxyiciemvjt89ia97q0p7og1kb12rofkrtv34gt44nwjaksd6iwuxlj07o0tmdkyo3g32k1s5j9536gdqmbjbqxcysia4uo3kestnryuth45mbxhgxqyquimo17hrvb8vyz28l24non3mwvcp9159hphzqja36krv3qfi7wmgubq5n54vp09iy9r4gwl23fj8w7f3hsnqhogjoky2q2n8ls8b61njt72cq8egu9m8yifzgrlnp0tjbv60glqbbcki9ln7iu0p1ay4yqslkvlbq98tpzch35anti71relsctg2dvey3c3lenlol3w21u9uqp5t1c7xwdb04l7ham83mf2avmt52hpbzxb1t91pid3qppn0srjxrh73k4pvg1y84acys7ulpvjq7abce5f3ab1qxxg9a5s4egcpyor47wz40gyuuioq9apga28ygjrnziqeombgnwmczip2ywhx423p65lpl3gd6oc25sps46bciniv40uhtj6s2qj2edy7cm3m6rmygyihrz83thu67otbgn1r71xmtr31gumqoenjidt5b3kdz2mt0kxt667wzseynafi5h1wdgi9y22b6ku2e9iuajk9tclr7r4q1hnhnzqsq77qww5btavsv983mqppn9w25k3njc6n1p0uccm334aszdxsru0dnrvndbvfaybjhvn1dsbpr4o5umafkdelzxju6yafoyu5jienptmzc3wblzu0rmcg43j8vtgcd1muxgqml35kjhg901ldh54f3d80jd9uf29dv16auy9dgs7m2u195tv9svoufl0wnw23nc72sk0u2yx4ozypl9iyg7foc5cz09vwjfwnh4r3o9bzici5mi9yhdkoh2o61hs9cz3dvaqwy6aedca 00:07:31.333 12:31:57 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:07:31.333 [2024-07-12 12:31:57.356746] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:07:31.333 [2024-07-12 12:31:57.357120] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64240 ] 00:07:31.591 [2024-07-12 12:31:57.492121] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.591 [2024-07-12 12:31:57.615502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.849 [2024-07-12 12:31:57.676269] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:32.981  Copying: 511/511 [MB] (average 1021 MBps) 00:07:32.981 00:07:32.981 12:31:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:07:32.981 12:31:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:07:32.981 12:31:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:32.981 12:31:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:32.981 [2024-07-12 12:31:58.888186] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:07:32.981 [2024-07-12 12:31:58.888284] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64261 ] 00:07:32.981 { 00:07:32.981 "subsystems": [ 00:07:32.981 { 00:07:32.981 "subsystem": "bdev", 00:07:32.981 "config": [ 00:07:32.981 { 00:07:32.981 "params": { 00:07:32.981 "block_size": 512, 00:07:32.981 "num_blocks": 1048576, 00:07:32.981 "name": "malloc0" 00:07:32.981 }, 00:07:32.981 "method": "bdev_malloc_create" 00:07:32.981 }, 00:07:32.981 { 00:07:32.981 "params": { 00:07:32.981 "filename": "/dev/zram1", 00:07:32.981 "name": "uring0" 00:07:32.981 }, 00:07:32.981 "method": "bdev_uring_create" 00:07:32.981 }, 00:07:32.981 { 00:07:32.981 "method": "bdev_wait_for_examine" 00:07:32.981 } 00:07:32.981 ] 00:07:32.981 } 00:07:32.981 ] 00:07:32.981 } 00:07:32.981 [2024-07-12 12:31:59.026392] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.239 [2024-07-12 12:31:59.158722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.239 [2024-07-12 12:31:59.217582] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:36.383  Copying: 211/512 [MB] (211 MBps) Copying: 429/512 [MB] (217 MBps) Copying: 512/512 [MB] (average 215 MBps) 00:07:36.383 00:07:36.384 12:32:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:07:36.384 12:32:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:07:36.384 12:32:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:36.384 12:32:02 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:36.384 [2024-07-12 12:32:02.260460] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:07:36.384 [2024-07-12 12:32:02.260553] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64305 ] 00:07:36.384 { 00:07:36.384 "subsystems": [ 00:07:36.384 { 00:07:36.384 "subsystem": "bdev", 00:07:36.384 "config": [ 00:07:36.384 { 00:07:36.384 "params": { 00:07:36.384 "block_size": 512, 00:07:36.384 "num_blocks": 1048576, 00:07:36.384 "name": "malloc0" 00:07:36.384 }, 00:07:36.384 "method": "bdev_malloc_create" 00:07:36.384 }, 00:07:36.384 { 00:07:36.384 "params": { 00:07:36.384 "filename": "/dev/zram1", 00:07:36.384 "name": "uring0" 00:07:36.384 }, 00:07:36.384 "method": "bdev_uring_create" 00:07:36.384 }, 00:07:36.384 { 00:07:36.384 "method": "bdev_wait_for_examine" 00:07:36.384 } 00:07:36.384 ] 00:07:36.384 } 00:07:36.384 ] 00:07:36.384 } 00:07:36.384 [2024-07-12 12:32:02.399248] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.641 [2024-07-12 12:32:02.499049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.641 [2024-07-12 12:32:02.553655] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:40.140  Copying: 185/512 [MB] (185 MBps) Copying: 355/512 [MB] (169 MBps) Copying: 511/512 [MB] (156 MBps) Copying: 512/512 [MB] (average 170 MBps) 00:07:40.140 00:07:40.140 12:32:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:07:40.140 12:32:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 04uzrmt7szjkk1ytiuban7l6sxthr1xc95ejpcifkvojyrqkwbt04jsk1yykybykjx9tcop7a1jkzjpati9o90i3rs4kwtuwz6z2um4b0764jngso22spt2lhsq4od7ucxyiciemvjt89ia97q0p7og1kb12rofkrtv34gt44nwjaksd6iwuxlj07o0tmdkyo3g32k1s5j9536gdqmbjbqxcysia4uo3kestnryuth45mbxhgxqyquimo17hrvb8vyz28l24non3mwvcp9159hphzqja36krv3qfi7wmgubq5n54vp09iy9r4gwl23fj8w7f3hsnqhogjoky2q2n8ls8b61njt72cq8egu9m8yifzgrlnp0tjbv60glqbbcki9ln7iu0p1ay4yqslkvlbq98tpzch35anti71relsctg2dvey3c3lenlol3w21u9uqp5t1c7xwdb04l7ham83mf2avmt52hpbzxb1t91pid3qppn0srjxrh73k4pvg1y84acys7ulpvjq7abce5f3ab1qxxg9a5s4egcpyor47wz40gyuuioq9apga28ygjrnziqeombgnwmczip2ywhx423p65lpl3gd6oc25sps46bciniv40uhtj6s2qj2edy7cm3m6rmygyihrz83thu67otbgn1r71xmtr31gumqoenjidt5b3kdz2mt0kxt667wzseynafi5h1wdgi9y22b6ku2e9iuajk9tclr7r4q1hnhnzqsq77qww5btavsv983mqppn9w25k3njc6n1p0uccm334aszdxsru0dnrvndbvfaybjhvn1dsbpr4o5umafkdelzxju6yafoyu5jienptmzc3wblzu0rmcg43j8vtgcd1muxgqml35kjhg901ldh54f3d80jd9uf29dv16auy9dgs7m2u195tv9svoufl0wnw23nc72sk0u2yx4ozypl9iyg7foc5cz09vwjfwnh4r3o9bzici5mi9yhdkoh2o61hs9cz3dvaqwy6aedca == 
\0\4\u\z\r\m\t\7\s\z\j\k\k\1\y\t\i\u\b\a\n\7\l\6\s\x\t\h\r\1\x\c\9\5\e\j\p\c\i\f\k\v\o\j\y\r\q\k\w\b\t\0\4\j\s\k\1\y\y\k\y\b\y\k\j\x\9\t\c\o\p\7\a\1\j\k\z\j\p\a\t\i\9\o\9\0\i\3\r\s\4\k\w\t\u\w\z\6\z\2\u\m\4\b\0\7\6\4\j\n\g\s\o\2\2\s\p\t\2\l\h\s\q\4\o\d\7\u\c\x\y\i\c\i\e\m\v\j\t\8\9\i\a\9\7\q\0\p\7\o\g\1\k\b\1\2\r\o\f\k\r\t\v\3\4\g\t\4\4\n\w\j\a\k\s\d\6\i\w\u\x\l\j\0\7\o\0\t\m\d\k\y\o\3\g\3\2\k\1\s\5\j\9\5\3\6\g\d\q\m\b\j\b\q\x\c\y\s\i\a\4\u\o\3\k\e\s\t\n\r\y\u\t\h\4\5\m\b\x\h\g\x\q\y\q\u\i\m\o\1\7\h\r\v\b\8\v\y\z\2\8\l\2\4\n\o\n\3\m\w\v\c\p\9\1\5\9\h\p\h\z\q\j\a\3\6\k\r\v\3\q\f\i\7\w\m\g\u\b\q\5\n\5\4\v\p\0\9\i\y\9\r\4\g\w\l\2\3\f\j\8\w\7\f\3\h\s\n\q\h\o\g\j\o\k\y\2\q\2\n\8\l\s\8\b\6\1\n\j\t\7\2\c\q\8\e\g\u\9\m\8\y\i\f\z\g\r\l\n\p\0\t\j\b\v\6\0\g\l\q\b\b\c\k\i\9\l\n\7\i\u\0\p\1\a\y\4\y\q\s\l\k\v\l\b\q\9\8\t\p\z\c\h\3\5\a\n\t\i\7\1\r\e\l\s\c\t\g\2\d\v\e\y\3\c\3\l\e\n\l\o\l\3\w\2\1\u\9\u\q\p\5\t\1\c\7\x\w\d\b\0\4\l\7\h\a\m\8\3\m\f\2\a\v\m\t\5\2\h\p\b\z\x\b\1\t\9\1\p\i\d\3\q\p\p\n\0\s\r\j\x\r\h\7\3\k\4\p\v\g\1\y\8\4\a\c\y\s\7\u\l\p\v\j\q\7\a\b\c\e\5\f\3\a\b\1\q\x\x\g\9\a\5\s\4\e\g\c\p\y\o\r\4\7\w\z\4\0\g\y\u\u\i\o\q\9\a\p\g\a\2\8\y\g\j\r\n\z\i\q\e\o\m\b\g\n\w\m\c\z\i\p\2\y\w\h\x\4\2\3\p\6\5\l\p\l\3\g\d\6\o\c\2\5\s\p\s\4\6\b\c\i\n\i\v\4\0\u\h\t\j\6\s\2\q\j\2\e\d\y\7\c\m\3\m\6\r\m\y\g\y\i\h\r\z\8\3\t\h\u\6\7\o\t\b\g\n\1\r\7\1\x\m\t\r\3\1\g\u\m\q\o\e\n\j\i\d\t\5\b\3\k\d\z\2\m\t\0\k\x\t\6\6\7\w\z\s\e\y\n\a\f\i\5\h\1\w\d\g\i\9\y\2\2\b\6\k\u\2\e\9\i\u\a\j\k\9\t\c\l\r\7\r\4\q\1\h\n\h\n\z\q\s\q\7\7\q\w\w\5\b\t\a\v\s\v\9\8\3\m\q\p\p\n\9\w\2\5\k\3\n\j\c\6\n\1\p\0\u\c\c\m\3\3\4\a\s\z\d\x\s\r\u\0\d\n\r\v\n\d\b\v\f\a\y\b\j\h\v\n\1\d\s\b\p\r\4\o\5\u\m\a\f\k\d\e\l\z\x\j\u\6\y\a\f\o\y\u\5\j\i\e\n\p\t\m\z\c\3\w\b\l\z\u\0\r\m\c\g\4\3\j\8\v\t\g\c\d\1\m\u\x\g\q\m\l\3\5\k\j\h\g\9\0\1\l\d\h\5\4\f\3\d\8\0\j\d\9\u\f\2\9\d\v\1\6\a\u\y\9\d\g\s\7\m\2\u\1\9\5\t\v\9\s\v\o\u\f\l\0\w\n\w\2\3\n\c\7\2\s\k\0\u\2\y\x\4\o\z\y\p\l\9\i\y\g\7\f\o\c\5\c\z\0\9\v\w\j\f\w\n\h\4\r\3\o\9\b\z\i\c\i\5\m\i\9\y\h\d\k\o\h\2\o\6\1\h\s\9\c\z\3\d\v\a\q\w\y\6\a\e\d\c\a ]] 00:07:40.140 12:32:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:07:40.140 12:32:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ 04uzrmt7szjkk1ytiuban7l6sxthr1xc95ejpcifkvojyrqkwbt04jsk1yykybykjx9tcop7a1jkzjpati9o90i3rs4kwtuwz6z2um4b0764jngso22spt2lhsq4od7ucxyiciemvjt89ia97q0p7og1kb12rofkrtv34gt44nwjaksd6iwuxlj07o0tmdkyo3g32k1s5j9536gdqmbjbqxcysia4uo3kestnryuth45mbxhgxqyquimo17hrvb8vyz28l24non3mwvcp9159hphzqja36krv3qfi7wmgubq5n54vp09iy9r4gwl23fj8w7f3hsnqhogjoky2q2n8ls8b61njt72cq8egu9m8yifzgrlnp0tjbv60glqbbcki9ln7iu0p1ay4yqslkvlbq98tpzch35anti71relsctg2dvey3c3lenlol3w21u9uqp5t1c7xwdb04l7ham83mf2avmt52hpbzxb1t91pid3qppn0srjxrh73k4pvg1y84acys7ulpvjq7abce5f3ab1qxxg9a5s4egcpyor47wz40gyuuioq9apga28ygjrnziqeombgnwmczip2ywhx423p65lpl3gd6oc25sps46bciniv40uhtj6s2qj2edy7cm3m6rmygyihrz83thu67otbgn1r71xmtr31gumqoenjidt5b3kdz2mt0kxt667wzseynafi5h1wdgi9y22b6ku2e9iuajk9tclr7r4q1hnhnzqsq77qww5btavsv983mqppn9w25k3njc6n1p0uccm334aszdxsru0dnrvndbvfaybjhvn1dsbpr4o5umafkdelzxju6yafoyu5jienptmzc3wblzu0rmcg43j8vtgcd1muxgqml35kjhg901ldh54f3d80jd9uf29dv16auy9dgs7m2u195tv9svoufl0wnw23nc72sk0u2yx4ozypl9iyg7foc5cz09vwjfwnh4r3o9bzici5mi9yhdkoh2o61hs9cz3dvaqwy6aedca == 
\0\4\u\z\r\m\t\7\s\z\j\k\k\1\y\t\i\u\b\a\n\7\l\6\s\x\t\h\r\1\x\c\9\5\e\j\p\c\i\f\k\v\o\j\y\r\q\k\w\b\t\0\4\j\s\k\1\y\y\k\y\b\y\k\j\x\9\t\c\o\p\7\a\1\j\k\z\j\p\a\t\i\9\o\9\0\i\3\r\s\4\k\w\t\u\w\z\6\z\2\u\m\4\b\0\7\6\4\j\n\g\s\o\2\2\s\p\t\2\l\h\s\q\4\o\d\7\u\c\x\y\i\c\i\e\m\v\j\t\8\9\i\a\9\7\q\0\p\7\o\g\1\k\b\1\2\r\o\f\k\r\t\v\3\4\g\t\4\4\n\w\j\a\k\s\d\6\i\w\u\x\l\j\0\7\o\0\t\m\d\k\y\o\3\g\3\2\k\1\s\5\j\9\5\3\6\g\d\q\m\b\j\b\q\x\c\y\s\i\a\4\u\o\3\k\e\s\t\n\r\y\u\t\h\4\5\m\b\x\h\g\x\q\y\q\u\i\m\o\1\7\h\r\v\b\8\v\y\z\2\8\l\2\4\n\o\n\3\m\w\v\c\p\9\1\5\9\h\p\h\z\q\j\a\3\6\k\r\v\3\q\f\i\7\w\m\g\u\b\q\5\n\5\4\v\p\0\9\i\y\9\r\4\g\w\l\2\3\f\j\8\w\7\f\3\h\s\n\q\h\o\g\j\o\k\y\2\q\2\n\8\l\s\8\b\6\1\n\j\t\7\2\c\q\8\e\g\u\9\m\8\y\i\f\z\g\r\l\n\p\0\t\j\b\v\6\0\g\l\q\b\b\c\k\i\9\l\n\7\i\u\0\p\1\a\y\4\y\q\s\l\k\v\l\b\q\9\8\t\p\z\c\h\3\5\a\n\t\i\7\1\r\e\l\s\c\t\g\2\d\v\e\y\3\c\3\l\e\n\l\o\l\3\w\2\1\u\9\u\q\p\5\t\1\c\7\x\w\d\b\0\4\l\7\h\a\m\8\3\m\f\2\a\v\m\t\5\2\h\p\b\z\x\b\1\t\9\1\p\i\d\3\q\p\p\n\0\s\r\j\x\r\h\7\3\k\4\p\v\g\1\y\8\4\a\c\y\s\7\u\l\p\v\j\q\7\a\b\c\e\5\f\3\a\b\1\q\x\x\g\9\a\5\s\4\e\g\c\p\y\o\r\4\7\w\z\4\0\g\y\u\u\i\o\q\9\a\p\g\a\2\8\y\g\j\r\n\z\i\q\e\o\m\b\g\n\w\m\c\z\i\p\2\y\w\h\x\4\2\3\p\6\5\l\p\l\3\g\d\6\o\c\2\5\s\p\s\4\6\b\c\i\n\i\v\4\0\u\h\t\j\6\s\2\q\j\2\e\d\y\7\c\m\3\m\6\r\m\y\g\y\i\h\r\z\8\3\t\h\u\6\7\o\t\b\g\n\1\r\7\1\x\m\t\r\3\1\g\u\m\q\o\e\n\j\i\d\t\5\b\3\k\d\z\2\m\t\0\k\x\t\6\6\7\w\z\s\e\y\n\a\f\i\5\h\1\w\d\g\i\9\y\2\2\b\6\k\u\2\e\9\i\u\a\j\k\9\t\c\l\r\7\r\4\q\1\h\n\h\n\z\q\s\q\7\7\q\w\w\5\b\t\a\v\s\v\9\8\3\m\q\p\p\n\9\w\2\5\k\3\n\j\c\6\n\1\p\0\u\c\c\m\3\3\4\a\s\z\d\x\s\r\u\0\d\n\r\v\n\d\b\v\f\a\y\b\j\h\v\n\1\d\s\b\p\r\4\o\5\u\m\a\f\k\d\e\l\z\x\j\u\6\y\a\f\o\y\u\5\j\i\e\n\p\t\m\z\c\3\w\b\l\z\u\0\r\m\c\g\4\3\j\8\v\t\g\c\d\1\m\u\x\g\q\m\l\3\5\k\j\h\g\9\0\1\l\d\h\5\4\f\3\d\8\0\j\d\9\u\f\2\9\d\v\1\6\a\u\y\9\d\g\s\7\m\2\u\1\9\5\t\v\9\s\v\o\u\f\l\0\w\n\w\2\3\n\c\7\2\s\k\0\u\2\y\x\4\o\z\y\p\l\9\i\y\g\7\f\o\c\5\c\z\0\9\v\w\j\f\w\n\h\4\r\3\o\9\b\z\i\c\i\5\m\i\9\y\h\d\k\o\h\2\o\6\1\h\s\9\c\z\3\d\v\a\q\w\y\6\a\e\d\c\a ]] 00:07:40.140 12:32:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:40.707 12:32:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:07:40.707 12:32:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:07:40.707 12:32:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:40.707 12:32:06 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:40.707 [2024-07-12 12:32:06.652526] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:07:40.707 [2024-07-12 12:32:06.652668] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64377 ] 00:07:40.707 { 00:07:40.707 "subsystems": [ 00:07:40.707 { 00:07:40.707 "subsystem": "bdev", 00:07:40.707 "config": [ 00:07:40.707 { 00:07:40.707 "params": { 00:07:40.707 "block_size": 512, 00:07:40.707 "num_blocks": 1048576, 00:07:40.707 "name": "malloc0" 00:07:40.707 }, 00:07:40.707 "method": "bdev_malloc_create" 00:07:40.707 }, 00:07:40.707 { 00:07:40.707 "params": { 00:07:40.707 "filename": "/dev/zram1", 00:07:40.707 "name": "uring0" 00:07:40.707 }, 00:07:40.707 "method": "bdev_uring_create" 00:07:40.707 }, 00:07:40.707 { 00:07:40.707 "method": "bdev_wait_for_examine" 00:07:40.707 } 00:07:40.707 ] 00:07:40.707 } 00:07:40.707 ] 00:07:40.707 } 00:07:40.966 [2024-07-12 12:32:06.791759] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.966 [2024-07-12 12:32:06.917614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.966 [2024-07-12 12:32:06.979707] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:45.034  Copying: 143/512 [MB] (143 MBps) Copying: 297/512 [MB] (153 MBps) Copying: 456/512 [MB] (158 MBps) Copying: 512/512 [MB] (average 152 MBps) 00:07:45.034 00:07:45.034 12:32:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:07:45.035 12:32:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:07:45.035 12:32:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:45.035 12:32:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:07:45.035 12:32:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:07:45.035 12:32:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:07:45.035 12:32:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:45.035 12:32:10 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:45.035 [2024-07-12 12:32:11.004033] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:07:45.035 [2024-07-12 12:32:11.004141] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64443 ] 00:07:45.035 { 00:07:45.035 "subsystems": [ 00:07:45.035 { 00:07:45.035 "subsystem": "bdev", 00:07:45.035 "config": [ 00:07:45.035 { 00:07:45.035 "params": { 00:07:45.035 "block_size": 512, 00:07:45.035 "num_blocks": 1048576, 00:07:45.035 "name": "malloc0" 00:07:45.035 }, 00:07:45.035 "method": "bdev_malloc_create" 00:07:45.035 }, 00:07:45.035 { 00:07:45.035 "params": { 00:07:45.035 "filename": "/dev/zram1", 00:07:45.035 "name": "uring0" 00:07:45.035 }, 00:07:45.035 "method": "bdev_uring_create" 00:07:45.035 }, 00:07:45.035 { 00:07:45.035 "params": { 00:07:45.035 "name": "uring0" 00:07:45.035 }, 00:07:45.035 "method": "bdev_uring_delete" 00:07:45.035 }, 00:07:45.035 { 00:07:45.035 "method": "bdev_wait_for_examine" 00:07:45.035 } 00:07:45.035 ] 00:07:45.035 } 00:07:45.035 ] 00:07:45.035 } 00:07:45.293 [2024-07-12 12:32:11.207357] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.293 [2024-07-12 12:32:11.290543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.293 [2024-07-12 12:32:11.344452] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:46.164  Copying: 0/0 [B] (average 0 Bps) 00:07:46.164 00:07:46.165 12:32:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:07:46.165 12:32:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:46.165 12:32:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:07:46.165 12:32:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:46.165 12:32:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@648 -- # local es=0 00:07:46.165 12:32:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:46.165 12:32:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:46.165 12:32:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.165 12:32:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:46.165 12:32:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.165 12:32:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:46.165 12:32:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.165 12:32:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:46.165 12:32:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:46.165 12:32:11 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:46.165 12:32:11 spdk_dd.spdk_dd_uring.dd_uring_copy 
-- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:07:46.165 [2024-07-12 12:32:12.011989] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:07:46.165 [2024-07-12 12:32:12.012132] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64472 ] 00:07:46.165 { 00:07:46.165 "subsystems": [ 00:07:46.165 { 00:07:46.165 "subsystem": "bdev", 00:07:46.165 "config": [ 00:07:46.165 { 00:07:46.165 "params": { 00:07:46.165 "block_size": 512, 00:07:46.165 "num_blocks": 1048576, 00:07:46.165 "name": "malloc0" 00:07:46.165 }, 00:07:46.165 "method": "bdev_malloc_create" 00:07:46.165 }, 00:07:46.165 { 00:07:46.165 "params": { 00:07:46.165 "filename": "/dev/zram1", 00:07:46.165 "name": "uring0" 00:07:46.165 }, 00:07:46.165 "method": "bdev_uring_create" 00:07:46.165 }, 00:07:46.165 { 00:07:46.165 "params": { 00:07:46.165 "name": "uring0" 00:07:46.165 }, 00:07:46.165 "method": "bdev_uring_delete" 00:07:46.165 }, 00:07:46.165 { 00:07:46.165 "method": "bdev_wait_for_examine" 00:07:46.165 } 00:07:46.165 ] 00:07:46.165 } 00:07:46.165 ] 00:07:46.165 } 00:07:46.165 [2024-07-12 12:32:12.147885] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.423 [2024-07-12 12:32:12.249878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.423 [2024-07-12 12:32:12.303318] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:46.681 [2024-07-12 12:32:12.505156] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:07:46.681 [2024-07-12 12:32:12.505218] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:07:46.681 [2024-07-12 12:32:12.505245] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:07:46.681 [2024-07-12 12:32:12.505255] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:46.939 [2024-07-12 12:32:12.822163] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:46.939 12:32:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@651 -- # es=237 00:07:46.939 12:32:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:46.940 12:32:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@660 -- # es=109 00:07:46.940 12:32:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # case "$es" in 00:07:46.940 12:32:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@668 -- # es=1 00:07:46.940 12:32:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:46.940 12:32:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:07:46.940 12:32:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # local id=1 00:07:46.940 12:32:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:07:46.940 12:32:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@176 -- # echo 1 00:07:46.940 12:32:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # echo 1 00:07:46.940 12:32:12 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:07:47.198 00:07:47.198 real 0m15.929s 00:07:47.198 user 0m10.889s 00:07:47.198 sys 0m12.761s 00:07:47.198 12:32:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.198 12:32:13 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:07:47.198 ************************************ 00:07:47.198 END TEST dd_uring_copy 00:07:47.198 ************************************ 00:07:47.198 12:32:13 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1142 -- # return 0 00:07:47.198 ************************************ 00:07:47.198 END TEST spdk_dd_uring 00:07:47.198 ************************************ 00:07:47.198 00:07:47.198 real 0m16.071s 00:07:47.198 user 0m10.939s 00:07:47.198 sys 0m12.855s 00:07:47.198 12:32:13 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.198 12:32:13 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:07:47.457 12:32:13 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:47.457 12:32:13 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:47.457 12:32:13 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:47.457 12:32:13 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.457 12:32:13 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:47.457 ************************************ 00:07:47.457 START TEST spdk_dd_sparse 00:07:47.457 ************************************ 00:07:47.457 12:32:13 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:07:47.457 * Looking for test storage... 00:07:47.457 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:47.457 12:32:13 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:47.457 12:32:13 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:47.457 12:32:13 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:47.457 12:32:13 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:47.457 12:32:13 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.457 12:32:13 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.457 12:32:13 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.457 12:32:13 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:07:47.457 12:32:13 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.457 12:32:13 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:07:47.457 12:32:13 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:07:47.457 12:32:13 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:07:47.457 12:32:13 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:07:47.457 12:32:13 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:07:47.457 12:32:13 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:07:47.457 12:32:13 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:07:47.457 12:32:13 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:07:47.457 12:32:13 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:07:47.457 12:32:13 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:07:47.457 12:32:13 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:07:47.457 1+0 records in 00:07:47.457 1+0 records out 00:07:47.457 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00696168 s, 602 MB/s 00:07:47.457 12:32:13 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:07:47.457 1+0 records in 00:07:47.457 1+0 records out 00:07:47.457 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00514778 s, 815 MB/s 00:07:47.457 12:32:13 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:07:47.457 1+0 records in 00:07:47.457 1+0 records out 00:07:47.457 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00744048 s, 564 MB/s 00:07:47.457 12:32:13 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:07:47.457 12:32:13 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:47.457 12:32:13 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.457 12:32:13 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:47.457 ************************************ 00:07:47.457 START TEST dd_sparse_file_to_file 00:07:47.457 ************************************ 00:07:47.457 12:32:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1123 -- # 
file_to_file 00:07:47.457 12:32:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:07:47.457 12:32:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:07:47.457 12:32:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:47.457 12:32:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:07:47.457 12:32:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:07:47.457 12:32:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:07:47.457 12:32:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:07:47.457 12:32:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:07:47.457 12:32:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:47.457 12:32:13 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:47.457 [2024-07-12 12:32:13.487515] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:07:47.457 [2024-07-12 12:32:13.487601] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64558 ] 00:07:47.457 { 00:07:47.457 "subsystems": [ 00:07:47.457 { 00:07:47.457 "subsystem": "bdev", 00:07:47.457 "config": [ 00:07:47.457 { 00:07:47.457 "params": { 00:07:47.457 "block_size": 4096, 00:07:47.457 "filename": "dd_sparse_aio_disk", 00:07:47.457 "name": "dd_aio" 00:07:47.457 }, 00:07:47.457 "method": "bdev_aio_create" 00:07:47.457 }, 00:07:47.457 { 00:07:47.457 "params": { 00:07:47.457 "lvs_name": "dd_lvstore", 00:07:47.457 "bdev_name": "dd_aio" 00:07:47.457 }, 00:07:47.457 "method": "bdev_lvol_create_lvstore" 00:07:47.457 }, 00:07:47.457 { 00:07:47.457 "method": "bdev_wait_for_examine" 00:07:47.457 } 00:07:47.457 ] 00:07:47.457 } 00:07:47.457 ] 00:07:47.457 } 00:07:47.715 [2024-07-12 12:32:13.624192] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.715 [2024-07-12 12:32:13.720896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.715 [2024-07-12 12:32:13.775070] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:48.230  Copying: 12/36 [MB] (average 800 MBps) 00:07:48.230 00:07:48.230 12:32:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:07:48.230 12:32:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:07:48.230 12:32:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:07:48.230 12:32:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:07:48.230 12:32:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:48.230 12:32:14 
spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:07:48.230 12:32:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:07:48.230 12:32:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:07:48.230 12:32:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:07:48.230 12:32:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:48.230 ************************************ 00:07:48.230 END TEST dd_sparse_file_to_file 00:07:48.230 ************************************ 00:07:48.230 00:07:48.230 real 0m0.703s 00:07:48.230 user 0m0.448s 00:07:48.230 sys 0m0.370s 00:07:48.230 12:32:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:48.230 12:32:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:48.230 12:32:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:07:48.230 12:32:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:07:48.230 12:32:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:48.230 12:32:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.230 12:32:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:48.230 ************************************ 00:07:48.230 START TEST dd_sparse_file_to_bdev 00:07:48.230 ************************************ 00:07:48.230 12:32:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1123 -- # file_to_bdev 00:07:48.231 12:32:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:48.231 12:32:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:07:48.231 12:32:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:07:48.231 12:32:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:07:48.231 12:32:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:07:48.231 12:32:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:07:48.231 12:32:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:07:48.231 12:32:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:48.231 [2024-07-12 12:32:14.251219] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
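The stat pair in the checks above is how every test in this sparse suite verifies that holes survived the copy: stat --printf=%s reports the apparent file size and stat --printf=%b the allocated 512-byte blocks, so source and destination must agree on both (here 37748736 bytes apparent but only 24576 blocks, i.e. 12 MiB, actually written). A stand-alone sketch of the same idea, with illustrative file names:

    # build a sparse source: 4 MiB of data at offsets 0, 16 MiB and 32 MiB, holes in between
    dd if=/dev/zero of=src.img bs=4M count=1
    dd if=/dev/zero of=src.img bs=4M count=1 seek=4
    dd if=/dev/zero of=src.img bs=4M count=1 seek=8
    stat --printf='%s %b\n' src.img        # expect 37748736 and, on these filesystems, 24576
    # after copying src.img to dst.img with --sparse, the destination must report the same pair
    [ "$(stat --printf=%s src.img)" = "$(stat --printf=%s dst.img)" ]
    [ "$(stat --printf=%b src.img)" = "$(stat --printf=%b dst.img)" ]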
00:07:48.231 [2024-07-12 12:32:14.251352] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64607 ] 00:07:48.231 { 00:07:48.231 "subsystems": [ 00:07:48.231 { 00:07:48.231 "subsystem": "bdev", 00:07:48.231 "config": [ 00:07:48.231 { 00:07:48.231 "params": { 00:07:48.231 "block_size": 4096, 00:07:48.231 "filename": "dd_sparse_aio_disk", 00:07:48.231 "name": "dd_aio" 00:07:48.231 }, 00:07:48.231 "method": "bdev_aio_create" 00:07:48.231 }, 00:07:48.231 { 00:07:48.231 "params": { 00:07:48.231 "lvs_name": "dd_lvstore", 00:07:48.231 "lvol_name": "dd_lvol", 00:07:48.231 "size_in_mib": 36, 00:07:48.231 "thin_provision": true 00:07:48.231 }, 00:07:48.231 "method": "bdev_lvol_create" 00:07:48.231 }, 00:07:48.231 { 00:07:48.231 "method": "bdev_wait_for_examine" 00:07:48.231 } 00:07:48.231 ] 00:07:48.231 } 00:07:48.231 ] 00:07:48.231 } 00:07:48.489 [2024-07-12 12:32:14.392218] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.489 [2024-07-12 12:32:14.483469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.489 [2024-07-12 12:32:14.540094] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:49.016  Copying: 12/36 [MB] (average 500 MBps) 00:07:49.016 00:07:49.016 00:07:49.016 real 0m0.709s 00:07:49.016 user 0m0.469s 00:07:49.016 sys 0m0.351s 00:07:49.016 12:32:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.016 ************************************ 00:07:49.016 END TEST dd_sparse_file_to_bdev 00:07:49.016 ************************************ 00:07:49.016 12:32:14 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:07:49.016 12:32:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:07:49.016 12:32:14 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:07:49.016 12:32:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:49.016 12:32:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.016 12:32:14 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:49.016 ************************************ 00:07:49.016 START TEST dd_sparse_bdev_to_file 00:07:49.016 ************************************ 00:07:49.016 12:32:14 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1123 -- # bdev_to_file 00:07:49.016 12:32:14 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:07:49.016 12:32:14 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:07:49.016 12:32:14 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:07:49.016 12:32:14 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:07:49.016 12:32:14 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:07:49.016 12:32:14 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 
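In these invocations spdk_dd addresses SPDK block devices directly: --ib and --ob name a bdev (here the logical volume dd_lvstore/dd_lvol) instead of a file, and --json points at the bdev subsystem configuration, which the harness generates with gen_conf and feeds in on /dev/fd/62. The dump that follows shows that config: a single bdev_aio_create for dd_sparse_aio_disk plus bdev_wait_for_examine, which is enough for the existing logical volume store to be rediscovered from the aio bdev. A stand-alone equivalent would look roughly like this, with bdev.json as an illustrative file holding that same config:

    # sketch of the traced invocation above, run from the spdk repository root
    # bdev.json: {"subsystems":[{"subsystem":"bdev","config":[
    #   {"params":{"block_size":4096,"filename":"dd_sparse_aio_disk","name":"dd_aio"},
    #    "method":"bdev_aio_create"},
    #   {"method":"bdev_wait_for_examine"}]}]}
    ./build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json bdev.json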
00:07:49.016 12:32:14 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:07:49.016 12:32:14 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:49.016 [2024-07-12 12:32:15.012428] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:07:49.016 [2024-07-12 12:32:15.012528] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64645 ] 00:07:49.016 { 00:07:49.016 "subsystems": [ 00:07:49.016 { 00:07:49.016 "subsystem": "bdev", 00:07:49.016 "config": [ 00:07:49.016 { 00:07:49.016 "params": { 00:07:49.016 "block_size": 4096, 00:07:49.016 "filename": "dd_sparse_aio_disk", 00:07:49.016 "name": "dd_aio" 00:07:49.016 }, 00:07:49.017 "method": "bdev_aio_create" 00:07:49.017 }, 00:07:49.017 { 00:07:49.017 "method": "bdev_wait_for_examine" 00:07:49.017 } 00:07:49.017 ] 00:07:49.017 } 00:07:49.017 ] 00:07:49.017 } 00:07:49.274 [2024-07-12 12:32:15.149401] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.274 [2024-07-12 12:32:15.245718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.274 [2024-07-12 12:32:15.298871] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:49.792  Copying: 12/36 [MB] (average 923 MBps) 00:07:49.792 00:07:49.792 12:32:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:07:49.792 12:32:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:07:49.792 12:32:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:07:49.792 12:32:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:07:49.792 12:32:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:07:49.792 12:32:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:07:49.792 12:32:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:07:49.792 12:32:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:07:49.792 12:32:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:07:49.792 12:32:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:07:49.792 00:07:49.792 real 0m0.704s 00:07:49.792 user 0m0.449s 00:07:49.792 sys 0m0.359s 00:07:49.792 12:32:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.792 ************************************ 00:07:49.792 END TEST dd_sparse_bdev_to_file 00:07:49.792 12:32:15 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:07:49.792 ************************************ 00:07:49.792 12:32:15 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:07:49.792 12:32:15 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:07:49.792 12:32:15 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:07:49.792 12:32:15 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:07:49.792 12:32:15 spdk_dd.spdk_dd_sparse 
-- dd/sparse.sh@13 -- # rm file_zero2 00:07:49.792 12:32:15 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:07:49.792 00:07:49.792 real 0m2.426s 00:07:49.792 user 0m1.461s 00:07:49.792 sys 0m1.281s 00:07:49.792 12:32:15 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.792 ************************************ 00:07:49.792 END TEST spdk_dd_sparse 00:07:49.792 ************************************ 00:07:49.792 12:32:15 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:07:49.792 12:32:15 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:49.792 12:32:15 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:49.792 12:32:15 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:49.792 12:32:15 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.792 12:32:15 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:49.792 ************************************ 00:07:49.792 START TEST spdk_dd_negative 00:07:49.793 ************************************ 00:07:49.793 12:32:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:07:49.793 * Looking for test storage... 00:07:49.793 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:49.793 12:32:15 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:49.793 12:32:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:49.793 12:32:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:49.793 12:32:15 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:49.793 12:32:15 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.793 12:32:15 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.054 12:32:15 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.054 12:32:15 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:07:50.054 12:32:15 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.054 12:32:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:50.054 12:32:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:50.054 12:32:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:50.054 12:32:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:50.054 12:32:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:07:50.054 12:32:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:50.054 12:32:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.054 12:32:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:50.054 ************************************ 00:07:50.054 START TEST dd_invalid_arguments 00:07:50.054 ************************************ 00:07:50.054 12:32:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1123 -- # invalid_arguments 00:07:50.054 12:32:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:50.054 12:32:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@648 -- # local es=0 00:07:50.054 12:32:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:50.054 12:32:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.054 12:32:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.054 12:32:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.054 12:32:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.054 12:32:15 
spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.054 12:32:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.054 12:32:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.054 12:32:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:50.054 12:32:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:07:50.054 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:07:50.054 00:07:50.054 CPU options: 00:07:50.054 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:07:50.054 (like [0,1,10]) 00:07:50.054 --lcores lcore to CPU mapping list. The list is in the format: 00:07:50.054 [<,lcores[@CPUs]>...] 00:07:50.054 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:07:50.054 Within the group, '-' is used for range separator, 00:07:50.054 ',' is used for single number separator. 00:07:50.054 '( )' can be omitted for single element group, 00:07:50.054 '@' can be omitted if cpus and lcores have the same value 00:07:50.054 --disable-cpumask-locks Disable CPU core lock files. 00:07:50.054 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:07:50.054 pollers in the app support interrupt mode) 00:07:50.054 -p, --main-core main (primary) core for DPDK 00:07:50.054 00:07:50.054 Configuration options: 00:07:50.054 -c, --config, --json JSON config file 00:07:50.054 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:07:50.054 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:07:50.054 --wait-for-rpc wait for RPCs to initialize subsystems 00:07:50.054 --rpcs-allowed comma-separated list of permitted RPCS 00:07:50.054 --json-ignore-init-errors don't exit on invalid config entry 00:07:50.054 00:07:50.054 Memory options: 00:07:50.054 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:07:50.054 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:07:50.054 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:07:50.054 -R, --huge-unlink unlink huge files after initialization 00:07:50.054 -n, --mem-channels number of memory channels used for DPDK 00:07:50.054 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:07:50.054 --msg-mempool-size global message memory pool size in count (default: 262143) 00:07:50.054 --no-huge run without using hugepages 00:07:50.054 -i, --shm-id shared memory ID (optional) 00:07:50.054 -g, --single-file-segments force creating just one hugetlbfs file 00:07:50.054 00:07:50.054 PCI options: 00:07:50.054 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:07:50.054 -B, --pci-blocked pci addr to block (can be used more than once) 00:07:50.054 -u, --no-pci disable PCI access 00:07:50.054 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:07:50.054 00:07:50.054 Log options: 00:07:50.054 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:07:50.054 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:07:50.054 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:07:50.054 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:07:50.054 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:07:50.054 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:07:50.054 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:07:50.054 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:07:50.054 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:07:50.054 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:07:50.054 virtio_vfio_user, vmd) 00:07:50.054 --silence-noticelog disable notice level logging to stderr 00:07:50.054 00:07:50.054 Trace options: 00:07:50.054 --num-trace-entries number of trace entries for each core, must be power of 2, 00:07:50.054 setting 0 to disable trace (default 32768) 00:07:50.054 Tracepoints vary in size and can use more than one trace entry. 00:07:50.054 -e, --tpoint-group [:] 00:07:50.054 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:07:50.054 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:07:50.054 [2024-07-12 12:32:15.930300] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:07:50.054 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:07:50.055 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:07:50.055 a tracepoint group. First tpoint inside a group can be enabled by 00:07:50.055 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:07:50.055 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:07:50.055 in /include/spdk_internal/trace_defs.h 00:07:50.055 00:07:50.055 Other options: 00:07:50.055 -h, --help show this usage 00:07:50.055 -v, --version print SPDK version 00:07:50.055 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:07:50.055 --env-context Opaque context for use of the env implementation 00:07:50.055 00:07:50.055 Application specific: 00:07:50.055 [--------- DD Options ---------] 00:07:50.055 --if Input file. Must specify either --if or --ib. 00:07:50.055 --ib Input bdev. Must specifier either --if or --ib 00:07:50.055 --of Output file. Must specify either --of or --ob. 00:07:50.055 --ob Output bdev. Must specify either --of or --ob. 00:07:50.055 --iflag Input file flags. 00:07:50.055 --oflag Output file flags. 00:07:50.055 --bs I/O unit size (default: 4096) 00:07:50.055 --qd Queue depth (default: 2) 00:07:50.055 --count I/O unit count. The number of I/O units to copy. (default: all) 00:07:50.055 --skip Skip this many I/O units at start of input. (default: 0) 00:07:50.055 --seek Skip this many I/O units at start of output. (default: 0) 00:07:50.055 --aio Force usage of AIO. (by default io_uring is used if available) 00:07:50.055 --sparse Enable hole skipping in input target 00:07:50.055 Available iflag and oflag values: 00:07:50.055 append - append mode 00:07:50.055 direct - use direct I/O for data 00:07:50.055 directory - fail unless a directory 00:07:50.055 dsync - use synchronized I/O for data 00:07:50.055 noatime - do not update access time 00:07:50.055 noctty - do not assign controlling terminal from file 00:07:50.055 nofollow - do not follow symlinks 00:07:50.055 nonblock - use non-blocking I/O 00:07:50.055 sync - use synchronized I/O for data and metadata 00:07:50.055 12:32:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # es=2 00:07:50.055 12:32:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:50.055 12:32:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:50.055 12:32:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:50.055 00:07:50.055 real 0m0.061s 00:07:50.055 user 0m0.037s 00:07:50.055 sys 0m0.022s 00:07:50.055 12:32:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.055 12:32:15 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:07:50.055 ************************************ 00:07:50.055 END TEST dd_invalid_arguments 00:07:50.055 ************************************ 00:07:50.055 12:32:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:50.055 12:32:15 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:07:50.055 12:32:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:50.055 12:32:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.055 12:32:15 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:50.055 ************************************ 00:07:50.055 START TEST dd_double_input 00:07:50.055 ************************************ 00:07:50.055 12:32:15 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1123 -- # double_input 00:07:50.055 12:32:15 spdk_dd.spdk_dd_negative.dd_double_input -- 
dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:50.055 12:32:15 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@648 -- # local es=0 00:07:50.055 12:32:15 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:50.055 12:32:15 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.055 12:32:15 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.055 12:32:15 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.055 12:32:15 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.055 12:32:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.055 12:32:15 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.055 12:32:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.055 12:32:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:50.055 12:32:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:07:50.055 [2024-07-12 12:32:16.053210] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
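This group of tests wraps spdk_dd in the harness's NOT helper and hands it deliberately broken argument sets; the usage text dumped above states the rules being enforced, namely exactly one input (--if or --ib) and exactly one output (--of or --ob). Roughly, the cases exercised here and just below are, with paths shortened for illustration:

    # each invocation must fail argument validation and exit non-zero
    spdk_dd --ii= --ob=                        # unknown option   -> "Invalid arguments"
    spdk_dd --if=dd.dump0 --ib= --ob=          # two inputs given -> "You may specify either --if or --ib, but not both."
    spdk_dd --if=dd.dump0 --of=dd.dump1 --ob=  # two outputs      -> "You may specify either --of or --ob, but not both."
    spdk_dd --ob=                              # no input at all  -> "You must specify either --if or --ib"

The later cases in this section (missing output, --bs=0, an oversized --bs that cannot be allocated, and a negative --count) follow the same pattern; when the command fails as intended, the helper treats the non-zero status as a pass, normalizing values above 128 as the es=237 -> 109 -> 1 and es=244 -> 116 -> 1 traces elsewhere in the log show.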
00:07:50.055 12:32:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # es=22 00:07:50.055 12:32:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:50.055 12:32:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:50.055 12:32:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:50.055 00:07:50.055 real 0m0.073s 00:07:50.055 user 0m0.052s 00:07:50.055 sys 0m0.020s 00:07:50.055 12:32:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.055 12:32:16 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:07:50.055 ************************************ 00:07:50.055 END TEST dd_double_input 00:07:50.055 ************************************ 00:07:50.055 12:32:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:50.055 12:32:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:07:50.055 12:32:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:50.055 12:32:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.055 12:32:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:50.055 ************************************ 00:07:50.055 START TEST dd_double_output 00:07:50.055 ************************************ 00:07:50.055 12:32:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1123 -- # double_output 00:07:50.055 12:32:16 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:50.055 12:32:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@648 -- # local es=0 00:07:50.055 12:32:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:50.055 12:32:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.313 12:32:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.313 12:32:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.313 12:32:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.313 12:32:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.313 12:32:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.313 12:32:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.313 12:32:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:50.313 12:32:16 spdk_dd.spdk_dd_negative.dd_double_output -- 
common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:07:50.313 [2024-07-12 12:32:16.182795] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:07:50.313 12:32:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # es=22 00:07:50.313 12:32:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:50.313 12:32:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:50.313 12:32:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:50.313 00:07:50.313 real 0m0.074s 00:07:50.313 user 0m0.051s 00:07:50.313 sys 0m0.021s 00:07:50.313 12:32:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.313 12:32:16 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:07:50.313 ************************************ 00:07:50.313 END TEST dd_double_output 00:07:50.313 ************************************ 00:07:50.313 12:32:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:50.313 12:32:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:07:50.313 12:32:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:50.313 12:32:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.313 12:32:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:50.313 ************************************ 00:07:50.313 START TEST dd_no_input 00:07:50.313 ************************************ 00:07:50.313 12:32:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1123 -- # no_input 00:07:50.313 12:32:16 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:50.313 12:32:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@648 -- # local es=0 00:07:50.313 12:32:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:50.313 12:32:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.313 12:32:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.313 12:32:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.313 12:32:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.313 12:32:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.313 12:32:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.313 12:32:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.313 12:32:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:50.313 12:32:16 
spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:07:50.313 [2024-07-12 12:32:16.304779] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:07:50.313 12:32:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # es=22 00:07:50.313 12:32:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:50.313 12:32:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:50.313 12:32:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:50.313 00:07:50.313 real 0m0.065s 00:07:50.313 user 0m0.036s 00:07:50.313 sys 0m0.028s 00:07:50.314 12:32:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.314 12:32:16 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:07:50.314 ************************************ 00:07:50.314 END TEST dd_no_input 00:07:50.314 ************************************ 00:07:50.314 12:32:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:50.314 12:32:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:07:50.314 12:32:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:50.314 12:32:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.314 12:32:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:50.314 ************************************ 00:07:50.314 START TEST dd_no_output 00:07:50.314 ************************************ 00:07:50.314 12:32:16 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1123 -- # no_output 00:07:50.314 12:32:16 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:50.314 12:32:16 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@648 -- # local es=0 00:07:50.314 12:32:16 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:50.314 12:32:16 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.314 12:32:16 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.314 12:32:16 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.314 12:32:16 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.314 12:32:16 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.314 12:32:16 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.314 12:32:16 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.314 12:32:16 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:50.314 12:32:16 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:50.572 [2024-07-12 12:32:16.420265] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # es=22 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:50.572 00:07:50.572 real 0m0.069s 00:07:50.572 user 0m0.050s 00:07:50.572 sys 0m0.018s 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.572 ************************************ 00:07:50.572 END TEST dd_no_output 00:07:50.572 ************************************ 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:50.572 ************************************ 00:07:50.572 START TEST dd_wrong_blocksize 00:07:50.572 ************************************ 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1123 -- # wrong_blocksize 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:07:50.572 [2024-07-12 12:32:16.529738] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # es=22 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:50.572 00:07:50.572 real 0m0.057s 00:07:50.572 user 0m0.039s 00:07:50.572 sys 0m0.017s 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.572 ************************************ 00:07:50.572 END TEST dd_wrong_blocksize 00:07:50.572 ************************************ 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:50.572 ************************************ 00:07:50.572 START TEST dd_smaller_blocksize 00:07:50.572 ************************************ 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1123 -- # smaller_blocksize 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:50.572 12:32:16 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:07:50.831 [2024-07-12 12:32:16.649267] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:07:50.831 [2024-07-12 12:32:16.649381] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64864 ] 00:07:50.831 [2024-07-12 12:32:16.778103] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.831 [2024-07-12 12:32:16.864020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.090 [2024-07-12 12:32:16.916995] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:51.350 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:07:51.350 [2024-07-12 12:32:17.212166] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:07:51.350 [2024-07-12 12:32:17.212254] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:51.350 [2024-07-12 12:32:17.329057] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:51.609 12:32:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # es=244 00:07:51.609 12:32:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:51.609 12:32:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # es=116 00:07:51.609 12:32:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # case "$es" in 00:07:51.609 12:32:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@668 -- # es=1 00:07:51.609 12:32:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:51.609 00:07:51.609 real 0m0.832s 00:07:51.609 user 0m0.360s 00:07:51.609 sys 0m0.367s 00:07:51.609 12:32:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.609 ************************************ 00:07:51.609 END TEST dd_smaller_blocksize 00:07:51.609 12:32:17 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:07:51.609 ************************************ 00:07:51.609 12:32:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:51.609 12:32:17 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:07:51.609 12:32:17 spdk_dd.spdk_dd_negative -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:51.609 12:32:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.609 12:32:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:51.609 ************************************ 00:07:51.609 START TEST dd_invalid_count 00:07:51.609 ************************************ 00:07:51.609 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1123 -- # invalid_count 00:07:51.609 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:51.609 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@648 -- # local es=0 00:07:51.609 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:51.609 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.609 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:51.609 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.609 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:51.609 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.609 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:51.609 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.609 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:51.609 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:07:51.609 [2024-07-12 12:32:17.538834] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:07:51.609 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # es=22 00:07:51.609 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:51.609 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:51.609 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:51.609 00:07:51.609 real 0m0.074s 00:07:51.609 user 0m0.049s 00:07:51.609 sys 0m0.023s 00:07:51.609 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.609 ************************************ 00:07:51.609 END TEST dd_invalid_count 00:07:51.609 ************************************ 00:07:51.609 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_count -- 
common/autotest_common.sh@10 -- # set +x 00:07:51.610 12:32:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:51.610 12:32:17 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:07:51.610 12:32:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:51.610 12:32:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.610 12:32:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:51.610 ************************************ 00:07:51.610 START TEST dd_invalid_oflag 00:07:51.610 ************************************ 00:07:51.610 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1123 -- # invalid_oflag 00:07:51.610 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:51.610 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@648 -- # local es=0 00:07:51.610 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:51.610 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.610 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:51.610 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.610 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:51.610 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.610 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:51.610 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.610 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:51.610 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:07:51.610 [2024-07-12 12:32:17.658973] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:07:51.610 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # es=22 00:07:51.610 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:51.610 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:51.610 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:51.610 00:07:51.610 real 0m0.064s 00:07:51.610 user 0m0.042s 00:07:51.610 sys 0m0.021s 00:07:51.610 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.610 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:07:51.610 ************************************ 
00:07:51.610 END TEST dd_invalid_oflag 00:07:51.610 ************************************ 00:07:51.868 12:32:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:51.868 12:32:17 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:07:51.868 12:32:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:51.868 12:32:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.868 12:32:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:51.868 ************************************ 00:07:51.868 START TEST dd_invalid_iflag 00:07:51.868 ************************************ 00:07:51.868 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1123 -- # invalid_iflag 00:07:51.868 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:51.868 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@648 -- # local es=0 00:07:51.868 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:51.868 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.868 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:51.868 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.868 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:51.868 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.868 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:51.868 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.868 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:51.868 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:07:51.868 [2024-07-12 12:32:17.784870] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:07:51.868 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # es=22 00:07:51.868 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:51.868 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:51.868 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:51.868 00:07:51.868 real 0m0.077s 00:07:51.868 user 0m0.050s 00:07:51.868 sys 0m0.026s 00:07:51.868 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.868 12:32:17 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 
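A note on the pattern driving every dd_* negative case above: the harness wraps the spdk_dd invocation in NOT (via valid_exec_arg), so a case passes only when the binary rejects the bad argument (--bs=0, --count=-9, --oflag/--iflag without --of/--if) with an error message and a nonzero exit status. A minimal stand-in for that inverted assertion, simplified from what common/autotest_common.sh actually does (the helper name here is illustrative):

expect_failure() {
    # Succeed only if the wrapped command fails, mirroring the NOT helper seen above.
    if "$@"; then
        return 1    # unexpected success
    fi
    return 0        # nonzero status is exactly what the negative test wants
}

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
expect_failure "$SPDK_DD" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
    --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0    # "Invalid --bs value"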
00:07:51.868 ************************************ 00:07:51.868 END TEST dd_invalid_iflag 00:07:51.868 ************************************ 00:07:51.868 12:32:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:51.868 12:32:17 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:07:51.868 12:32:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:51.868 12:32:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.868 12:32:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:51.868 ************************************ 00:07:51.868 START TEST dd_unknown_flag 00:07:51.868 ************************************ 00:07:51.868 12:32:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1123 -- # unknown_flag 00:07:51.868 12:32:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:51.868 12:32:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@648 -- # local es=0 00:07:51.868 12:32:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:51.868 12:32:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.868 12:32:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:51.868 12:32:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.868 12:32:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:51.868 12:32:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.868 12:32:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:51.868 12:32:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.868 12:32:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:51.868 12:32:17 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:07:51.868 [2024-07-12 12:32:17.907714] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
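The es bookkeeping threaded through each failing run (es=244, then es=116, then es=1 in dd_smaller_blocksize above) is the harness folding the raw exit status into a small, stable value: anything above 128 normally means the child died on a signal, so 128 is stripped before the remaining nonzero codes collapse to 1. A rough sketch of that normalization, not the harness's exact code:

normalize_es() {
    local es=$1
    if (( es > 128 )); then
        es=$(( es - 128 ))     # 244 -> 116: drop the "terminated by signal" bias
    fi
    if (( es != 0 )); then
        es=1                   # any remaining failure collapses to a plain failure
    fi
    echo "$es"
}

normalize_es 244    # prints 1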
00:07:51.868 [2024-07-12 12:32:17.907816] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64961 ] 00:07:52.126 [2024-07-12 12:32:18.044680] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.126 [2024-07-12 12:32:18.136698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.126 [2024-07-12 12:32:18.187642] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:52.384 [2024-07-12 12:32:18.220129] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:52.384 [2024-07-12 12:32:18.220211] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:52.384 [2024-07-12 12:32:18.220289] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:07:52.384 [2024-07-12 12:32:18.220302] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:52.384 [2024-07-12 12:32:18.220638] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:07:52.384 [2024-07-12 12:32:18.220656] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:52.384 [2024-07-12 12:32:18.220705] app.c:1039:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:52.384 [2024-07-12 12:32:18.220716] app.c:1039:app_stop: *NOTICE*: spdk_app_stop called twice 00:07:52.384 [2024-07-12 12:32:18.330729] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:52.384 12:32:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # es=234 00:07:52.384 12:32:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:52.384 12:32:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # es=106 00:07:52.384 12:32:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # case "$es" in 00:07:52.384 12:32:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@668 -- # es=1 00:07:52.384 12:32:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:52.384 00:07:52.384 real 0m0.594s 00:07:52.384 user 0m0.341s 00:07:52.384 sys 0m0.159s 00:07:52.384 12:32:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.384 ************************************ 00:07:52.384 END TEST dd_unknown_flag 00:07:52.384 ************************************ 00:07:52.384 12:32:18 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:07:52.643 12:32:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:52.643 12:32:18 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:07:52.643 12:32:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:52.643 12:32:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.643 12:32:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:52.643 ************************************ 00:07:52.643 START TEST dd_invalid_json 00:07:52.643 ************************************ 00:07:52.643 12:32:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1123 -- # invalid_json 00:07:52.643 12:32:18 
spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:52.643 12:32:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@648 -- # local es=0 00:07:52.643 12:32:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:52.643 12:32:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:07:52.643 12:32:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.643 12:32:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:52.643 12:32:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.643 12:32:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:52.643 12:32:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.643 12:32:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:52.643 12:32:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.643 12:32:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:52.643 12:32:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:07:52.643 [2024-07-12 12:32:18.556697] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
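dd_invalid_json above hands spdk_dd its configuration through --json /dev/fd/62, i.e. the JSON arrives on an inherited file descriptor rather than a file on disk; the fd is wired to the output of the null command ":" visible above, so the parser reports "JSON data cannot be empty" and the copy fails. Process substitution is the simpler equivalent of that plumbing (paths reused from the test, the empty payload is deliberate):

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
"$SPDK_DD" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
    --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 \
    --json <(:) || echo "rejected as expected"    # empty JSON stream on a /dev/fd path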
00:07:52.643 [2024-07-12 12:32:18.556838] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64984 ] 00:07:52.643 [2024-07-12 12:32:18.691919] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.901 [2024-07-12 12:32:18.786226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.901 [2024-07-12 12:32:18.786316] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:07:52.901 [2024-07-12 12:32:18.786333] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:52.901 [2024-07-12 12:32:18.786342] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:52.901 [2024-07-12 12:32:18.786378] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:52.901 12:32:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # es=234 00:07:52.901 12:32:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:52.901 12:32:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # es=106 00:07:52.901 12:32:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # case "$es" in 00:07:52.901 12:32:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@668 -- # es=1 00:07:52.901 12:32:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:52.901 00:07:52.901 real 0m0.395s 00:07:52.901 user 0m0.227s 00:07:52.901 sys 0m0.066s 00:07:52.901 12:32:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.901 12:32:18 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:07:52.901 ************************************ 00:07:52.901 END TEST dd_invalid_json 00:07:52.901 ************************************ 00:07:52.901 12:32:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:07:52.901 ************************************ 00:07:52.901 END TEST spdk_dd_negative 00:07:52.901 ************************************ 00:07:52.901 00:07:52.901 real 0m3.165s 00:07:52.901 user 0m1.570s 00:07:52.901 sys 0m1.228s 00:07:52.901 12:32:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.901 12:32:18 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:07:53.159 12:32:18 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:07:53.159 00:07:53.159 real 1m21.531s 00:07:53.159 user 0m53.746s 00:07:53.159 sys 0m34.322s 00:07:53.159 12:32:18 spdk_dd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:53.159 12:32:18 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:53.159 ************************************ 00:07:53.159 END TEST spdk_dd 00:07:53.159 ************************************ 00:07:53.159 12:32:19 -- common/autotest_common.sh@1142 -- # return 0 00:07:53.159 12:32:19 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:53.159 12:32:19 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:53.159 12:32:19 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:53.159 12:32:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:53.159 12:32:19 -- common/autotest_common.sh@10 -- # set +x 00:07:53.159 12:32:19 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 
']' 00:07:53.159 12:32:19 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:53.159 12:32:19 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:53.159 12:32:19 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:53.159 12:32:19 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:53.159 12:32:19 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:53.159 12:32:19 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:53.159 12:32:19 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:53.159 12:32:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.159 12:32:19 -- common/autotest_common.sh@10 -- # set +x 00:07:53.159 ************************************ 00:07:53.159 START TEST nvmf_tcp 00:07:53.159 ************************************ 00:07:53.159 12:32:19 nvmf_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:53.159 * Looking for test storage... 00:07:53.159 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:53.159 12:32:19 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:53.159 12:32:19 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:53.159 12:32:19 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:53.159 12:32:19 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:53.159 12:32:19 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:53.159 12:32:19 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:53.159 12:32:19 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:53.159 12:32:19 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:53.159 12:32:19 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:53.159 12:32:19 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:53.159 12:32:19 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:53.159 12:32:19 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:53.159 12:32:19 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:53.159 12:32:19 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:53.159 12:32:19 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:07:53.159 12:32:19 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=16360ad5-8c23-4d49-afe0-9a35c426fec5 00:07:53.159 12:32:19 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:53.159 12:32:19 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:53.159 12:32:19 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:53.159 12:32:19 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:53.159 12:32:19 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:53.159 12:32:19 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:53.159 12:32:19 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:53.159 12:32:19 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:53.160 12:32:19 nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.160 12:32:19 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.160 12:32:19 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.160 12:32:19 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:53.160 12:32:19 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.160 12:32:19 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:53.160 12:32:19 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:53.160 12:32:19 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:53.160 12:32:19 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:53.160 12:32:19 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:53.160 12:32:19 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:53.160 12:32:19 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:53.160 12:32:19 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:53.160 12:32:19 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:53.160 12:32:19 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:53.160 12:32:19 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:53.160 12:32:19 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:53.160 12:32:19 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:53.160 12:32:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:53.160 12:32:19 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:07:53.160 12:32:19 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:53.160 12:32:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:53.160 12:32:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.160 12:32:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:53.160 ************************************ 00:07:53.160 START TEST nvmf_host_management 00:07:53.160 ************************************ 00:07:53.160 
12:32:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:53.417 * Looking for test storage... 00:07:53.417 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:53.417 12:32:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:53.417 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:53.417 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:53.417 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:53.417 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:53.417 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:53.417 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:53.417 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:53.417 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:53.417 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:53.417 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:53.417 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:53.417 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:07:53.417 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=16360ad5-8c23-4d49-afe0-9a35c426fec5 00:07:53.417 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:53.417 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:53.417 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:53.417 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:53.418 Cannot find device "nvmf_init_br" 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # true 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:53.418 Cannot find device "nvmf_tgt_br" 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:53.418 Cannot find device "nvmf_tgt_br2" 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:53.418 Cannot find device "nvmf_init_br" 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # true 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:53.418 Cannot find device "nvmf_tgt_br" 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:07:53.418 12:32:19 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:53.418 Cannot find device "nvmf_tgt_br2" 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:53.418 Cannot find device "nvmf_br" 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # true 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:53.418 Cannot find device "nvmf_init_if" 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # true 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:53.418 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:53.418 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:53.418 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:53.677 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:53.677 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:53.677 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:53.677 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:53.677 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:53.677 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:53.677 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:53.677 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:53.677 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:53.677 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:53.677 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 
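nvmftestinit above assembles a self-contained test network instead of touching real NICs: the nvmf_tgt_ns_spdk namespace holds the target side, veth pairs provide the initiator interface plus two target interfaces at 10.0.0.1/2/3, and the nvmf_br bridge picks up the host-side peers just below, finished off with the iptables rule and the ping checks. Condensed to one target interface (the second one, 10.0.0.3 on nvmf_tgt_if2, is handled identically):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br    # host-side veth peers join the bridge
ip link set nvmf_tgt_br master nvmf_br
ping -c 1 10.0.0.2                         # initiator -> target reachability, as above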
00:07:53.677 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:53.677 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:53.677 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:53.677 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:53.677 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:53.677 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:53.677 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:53.677 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:07:53.677 00:07:53.677 --- 10.0.0.2 ping statistics --- 00:07:53.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.677 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:07:53.677 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:53.677 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:53.677 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.109 ms 00:07:53.677 00:07:53.677 --- 10.0.0.3 ping statistics --- 00:07:53.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.677 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:07:53.677 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:53.677 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:53.677 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:07:53.677 00:07:53.677 --- 10.0.0.1 ping statistics --- 00:07:53.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.677 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:07:53.677 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:53.677 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:07:53.677 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:53.677 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:53.677 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:53.677 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:53.677 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:53.677 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:53.677 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:53.677 12:32:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:53.677 12:32:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:53.677 12:32:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:53.677 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:53.677 12:32:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:53.677 12:32:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:53.677 12:32:19 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@481 -- # nvmfpid=65245 00:07:53.677 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:53.677 12:32:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 65245 00:07:53.677 12:32:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 65245 ']' 00:07:53.677 12:32:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.677 12:32:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:53.677 12:32:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.677 12:32:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:53.677 12:32:19 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:53.935 [2024-07-12 12:32:19.793545] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:07:53.935 [2024-07-12 12:32:19.793696] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:53.935 [2024-07-12 12:32:19.942941] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:54.192 [2024-07-12 12:32:20.066618] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:54.192 [2024-07-12 12:32:20.066702] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:54.192 [2024-07-12 12:32:20.066730] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:54.192 [2024-07-12 12:32:20.066741] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:54.192 [2024-07-12 12:32:20.066750] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
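nvmfappstart above is what puts the target behind that network: NVMF_APP is prefixed with "ip netns exec nvmf_tgt_ns_spdk", nvmf_tgt starts with -i 0 -e 0xFFFF -m 0x1E (all tracepoint groups, reactors on cores 1-4), its pid is recorded as nvmfpid=65245, and waitforlisten blocks until /var/tmp/spdk.sock answers RPCs. A stripped-down launch-and-wait sketch; the polling loop stands in for waitforlisten and is illustrative only:

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

for _ in $(seq 1 100); do
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods >/dev/null 2>&1; then
        break    # target is up and its RPC socket is listening
    fi
    sleep 0.1
done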
00:07:54.192 [2024-07-12 12:32:20.066915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:54.192 [2024-07-12 12:32:20.067065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:54.193 [2024-07-12 12:32:20.067166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:54.193 [2024-07-12 12:32:20.067169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.193 [2024-07-12 12:32:20.124936] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:54.758 12:32:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:54.758 12:32:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:07:54.758 12:32:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:54.758 12:32:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:54.758 12:32:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:54.758 12:32:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:54.758 12:32:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:54.758 12:32:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.758 12:32:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:54.758 [2024-07-12 12:32:20.778119] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:54.758 12:32:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.758 12:32:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:54.758 12:32:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:54.758 12:32:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:54.758 12:32:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:54.758 12:32:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:54.758 12:32:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:54.758 12:32:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.758 12:32:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:55.016 Malloc0 00:07:55.016 [2024-07-12 12:32:20.861967] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:55.016 12:32:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.016 12:32:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:55.016 12:32:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:55.016 12:32:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:55.016 12:32:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=65299 00:07:55.016 12:32:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 65299 /var/tmp/bdevperf.sock 00:07:55.016 12:32:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 65299 ']' 
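The rpcs.txt batch that host_management.sh feeds to rpc_cmd above is not echoed line by line, but the pieces visible in the log (the TCP transport, the Malloc0 bdev built from MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512, the cnode0 subnqn named in the bdevperf config below, and the listener notice on 10.0.0.2 port 4420) correspond to a sequence along these lines, sketched here with plain rpc.py calls rather than the script's exact batch:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o -u 8192                      # "TCP Transport Init" above
$RPC bdev_malloc_create 64 512 -b Malloc0                         # 64 MB bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420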
00:07:55.016 12:32:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:55.016 12:32:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:55.016 12:32:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:55.016 12:32:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:55.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:55.016 12:32:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:55.016 12:32:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:55.016 12:32:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:55.016 12:32:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:55.016 12:32:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:55.016 12:32:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:55.016 12:32:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:55.016 { 00:07:55.016 "params": { 00:07:55.016 "name": "Nvme$subsystem", 00:07:55.016 "trtype": "$TEST_TRANSPORT", 00:07:55.016 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:55.016 "adrfam": "ipv4", 00:07:55.016 "trsvcid": "$NVMF_PORT", 00:07:55.016 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:55.016 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:55.016 "hdgst": ${hdgst:-false}, 00:07:55.016 "ddgst": ${ddgst:-false} 00:07:55.016 }, 00:07:55.016 "method": "bdev_nvme_attach_controller" 00:07:55.016 } 00:07:55.016 EOF 00:07:55.016 )") 00:07:55.016 12:32:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:55.016 12:32:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:55.016 12:32:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:55.016 12:32:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:55.016 "params": { 00:07:55.016 "name": "Nvme0", 00:07:55.016 "trtype": "tcp", 00:07:55.016 "traddr": "10.0.0.2", 00:07:55.016 "adrfam": "ipv4", 00:07:55.016 "trsvcid": "4420", 00:07:55.016 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:55.016 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:55.016 "hdgst": false, 00:07:55.016 "ddgst": false 00:07:55.016 }, 00:07:55.016 "method": "bdev_nvme_attach_controller" 00:07:55.016 }' 00:07:55.016 [2024-07-12 12:32:20.977048] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:07:55.016 [2024-07-12 12:32:20.977145] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65299 ] 00:07:55.273 [2024-07-12 12:32:21.127053] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.273 [2024-07-12 12:32:21.229686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.273 [2024-07-12 12:32:21.291654] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:55.531 Running I/O for 10 seconds... 00:07:56.096 12:32:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:56.096 12:32:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:07:56.096 12:32:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:56.096 12:32:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.096 12:32:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:56.096 12:32:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.096 12:32:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:56.096 12:32:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:56.096 12:32:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:56.096 12:32:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:56.096 12:32:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:56.096 12:32:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:56.096 12:32:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:56.096 12:32:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:56.096 12:32:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:56.096 12:32:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.096 12:32:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:56.096 12:32:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:56.096 12:32:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.096 12:32:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=899 00:07:56.096 12:32:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 899 -ge 100 ']' 00:07:56.096 12:32:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:56.096 12:32:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:56.096 12:32:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:56.096 12:32:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 
nqn.2016-06.io.spdk:host0 00:07:56.096 12:32:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.096 12:32:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:56.096 12:32:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.096 12:32:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:56.096 12:32:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.096 12:32:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:56.096 12:32:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.096 12:32:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:56.096 [2024-07-12 12:32:22.078002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.096 [2024-07-12 12:32:22.078053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.096 [2024-07-12 12:32:22.078078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.096 [2024-07-12 12:32:22.078089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.096 [2024-07-12 12:32:22.078101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.096 [2024-07-12 12:32:22.078111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.097 [2024-07-12 12:32:22.078123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.097 [2024-07-12 12:32:22.078133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.097 [2024-07-12 12:32:22.078145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.097 [2024-07-12 12:32:22.078155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.097 [2024-07-12 12:32:22.078167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.097 [2024-07-12 12:32:22.078177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.097 [2024-07-12 12:32:22.078188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.097 [2024-07-12 12:32:22.078198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.097 [2024-07-12 12:32:22.078209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.097 
[2024-07-12 12:32:22.078218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.097 [2024-07-12 12:32:22.078231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.097 [2024-07-12 12:32:22.078240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.097 [2024-07-12 12:32:22.078251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.097 [2024-07-12 12:32:22.078261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.097 [2024-07-12 12:32:22.078272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.097 [2024-07-12 12:32:22.078282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.097 [2024-07-12 12:32:22.078294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.097 [2024-07-12 12:32:22.078303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.097 [2024-07-12 12:32:22.078314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.097 [2024-07-12 12:32:22.078324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.097 [2024-07-12 12:32:22.078336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.097 [2024-07-12 12:32:22.078345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.097 [2024-07-12 12:32:22.078357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.097 [2024-07-12 12:32:22.078367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.097 [2024-07-12 12:32:22.078378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.097 [2024-07-12 12:32:22.078388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.097 [2024-07-12 12:32:22.078413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.097 [2024-07-12 12:32:22.078424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.097 [2024-07-12 12:32:22.078437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.097 [2024-07-12 
12:32:22.078447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.097 [2024-07-12 12:32:22.078458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.097 [2024-07-12 12:32:22.078468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.097 [2024-07-12 12:32:22.078479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.097 [2024-07-12 12:32:22.078489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.097 [2024-07-12 12:32:22.078500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.097 [2024-07-12 12:32:22.078528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.097 [2024-07-12 12:32:22.078541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.097 [2024-07-12 12:32:22.078550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.097 [2024-07-12 12:32:22.078562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.097 [2024-07-12 12:32:22.078571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.097 [2024-07-12 12:32:22.078583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.097 [2024-07-12 12:32:22.078593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.097 [2024-07-12 12:32:22.078604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.097 [2024-07-12 12:32:22.078614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.097 [2024-07-12 12:32:22.078625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.097 [2024-07-12 12:32:22.078634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.097 [2024-07-12 12:32:22.078645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.097 [2024-07-12 12:32:22.078654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.097 [2024-07-12 12:32:22.078665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.097 [2024-07-12 12:32:22.078675] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.097 [2024-07-12 12:32:22.078686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.097 [2024-07-12 12:32:22.078704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.097 [2024-07-12 12:32:22.078716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.097 [2024-07-12 12:32:22.078725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.097 [2024-07-12 12:32:22.078736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.097 [2024-07-12 12:32:22.078747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.097 [2024-07-12 12:32:22.078758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.097 [2024-07-12 12:32:22.078768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.097 [2024-07-12 12:32:22.078779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.097 [2024-07-12 12:32:22.078790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.097 [2024-07-12 12:32:22.078801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.097 [2024-07-12 12:32:22.078811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.097 [2024-07-12 12:32:22.078823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.097 [2024-07-12 12:32:22.078832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.097 [2024-07-12 12:32:22.078843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.097 [2024-07-12 12:32:22.078853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.097 [2024-07-12 12:32:22.078864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.097 [2024-07-12 12:32:22.078879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.097 [2024-07-12 12:32:22.078891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.097 [2024-07-12 12:32:22.078902] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.097 [2024-07-12 12:32:22.078914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.097 [2024-07-12 12:32:22.078923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.097 [2024-07-12 12:32:22.078934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.097 [2024-07-12 12:32:22.078943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.097 [2024-07-12 12:32:22.078955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.097 [2024-07-12 12:32:22.078964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.098 [2024-07-12 12:32:22.078975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.098 [2024-07-12 12:32:22.078985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.098 [2024-07-12 12:32:22.078996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.098 [2024-07-12 12:32:22.079005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.098 [2024-07-12 12:32:22.079017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.098 [2024-07-12 12:32:22.079026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.098 [2024-07-12 12:32:22.079037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.098 [2024-07-12 12:32:22.079047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.098 [2024-07-12 12:32:22.079058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.098 [2024-07-12 12:32:22.079068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.098 [2024-07-12 12:32:22.079079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.098 [2024-07-12 12:32:22.079089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.098 [2024-07-12 12:32:22.079101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.098 [2024-07-12 12:32:22.079110] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.098 [2024-07-12 12:32:22.079121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.098 [2024-07-12 12:32:22.079132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.098 [2024-07-12 12:32:22.079143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.098 [2024-07-12 12:32:22.079153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.098 [2024-07-12 12:32:22.079164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.098 [2024-07-12 12:32:22.079174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.098 [2024-07-12 12:32:22.079185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.098 [2024-07-12 12:32:22.079195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.098 [2024-07-12 12:32:22.079206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.098 [2024-07-12 12:32:22.079228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.098 [2024-07-12 12:32:22.079241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.098 [2024-07-12 12:32:22.079250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.098 [2024-07-12 12:32:22.079262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.098 [2024-07-12 12:32:22.079271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.098 [2024-07-12 12:32:22.079283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.098 [2024-07-12 12:32:22.079292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.098 [2024-07-12 12:32:22.079303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.098 [2024-07-12 12:32:22.079313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.098 [2024-07-12 12:32:22.079334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.098 [2024-07-12 12:32:22.079344] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.098 [2024-07-12 12:32:22.079356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.098 [2024-07-12 12:32:22.079365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.098 [2024-07-12 12:32:22.079376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.098 [2024-07-12 12:32:22.079385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.098 [2024-07-12 12:32:22.079396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.098 [2024-07-12 12:32:22.079416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.098 [2024-07-12 12:32:22.079428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.098 [2024-07-12 12:32:22.079437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.098 [2024-07-12 12:32:22.079448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.098 [2024-07-12 12:32:22.079458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.098 [2024-07-12 12:32:22.079469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:56.098 [2024-07-12 12:32:22.079479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.098 [2024-07-12 12:32:22.079490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1490ec0 is same with the state(5) to be set 00:07:56.098 [2024-07-12 12:32:22.079557] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1490ec0 was disconnected and freed. reset controller. 
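The long run of nvme_qpair messages above is the expected fallout of the nvmf_subsystem_remove_host call issued at host_management.sh@84 while bdevperf still had its 64-deep queue of WRITEs in flight: the target drops the host, the I/O qpair is deleted, and every queued command is completed with ABORTED - SQ DELETION before the controller is reset. If this console output is saved to a file (filename illustrative), the aborted completions can be counted with a one-liner such as:

    # count I/O completed with ABORTED - SQ DELETION in a saved copy of this log (filename illustrative)
    grep -c 'ABORTED - SQ DELETION' nvmf_host_management_console.log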
00:07:56.098 [2024-07-12 12:32:22.079657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:56.098 [2024-07-12 12:32:22.079674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.098 [2024-07-12 12:32:22.079686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:56.098 [2024-07-12 12:32:22.079695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.098 [2024-07-12 12:32:22.079706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:56.098 [2024-07-12 12:32:22.079715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.098 [2024-07-12 12:32:22.079725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:56.098 [2024-07-12 12:32:22.079740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:56.098 [2024-07-12 12:32:22.079750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1488d50 is same with the state(5) to be set 00:07:56.098 [2024-07-12 12:32:22.080859] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:56.098 task offset: 0 on job bdev=Nvme0n1 fails 00:07:56.098 00:07:56.098 Latency(us) 00:07:56.098 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:56.098 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:56.098 Job: Nvme0n1 ended in about 0.67 seconds with error 00:07:56.098 Verification LBA range: start 0x0 length 0x400 00:07:56.098 Nvme0n1 : 0.67 1519.15 94.95 94.95 0.00 38660.03 2204.39 37415.10 00:07:56.098 =================================================================================================================== 00:07:56.098 Total : 1519.15 94.95 94.95 0.00 38660.03 2204.39 37415.10 00:07:56.098 [2024-07-12 12:32:22.082804] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:56.098 [2024-07-12 12:32:22.082834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1488d50 (9): Bad file descriptor 00:07:56.098 [2024-07-12 12:32:22.094271] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
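In the failure-path table above, the MiB/s column is simply IOPS multiplied by the 64 KiB I/O size used for this run (-o 65536), which is easy to verify from the reported numbers:

    # sanity-check the reported throughput: 1519.15 IOPS at 64 KiB per I/O
    awk 'BEGIN { printf "%.2f MiB/s\n", 1519.15 * 65536 / (1024 * 1024) }'   # prints 94.95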
00:07:57.031 12:32:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 65299 00:07:57.031 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (65299) - No such process 00:07:57.031 12:32:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:57.031 12:32:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:57.031 12:32:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:57.031 12:32:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:57.031 12:32:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:57.031 12:32:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:57.031 12:32:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:57.031 12:32:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:57.031 { 00:07:57.031 "params": { 00:07:57.031 "name": "Nvme$subsystem", 00:07:57.031 "trtype": "$TEST_TRANSPORT", 00:07:57.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:57.031 "adrfam": "ipv4", 00:07:57.031 "trsvcid": "$NVMF_PORT", 00:07:57.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:57.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:57.031 "hdgst": ${hdgst:-false}, 00:07:57.031 "ddgst": ${ddgst:-false} 00:07:57.031 }, 00:07:57.031 "method": "bdev_nvme_attach_controller" 00:07:57.031 } 00:07:57.031 EOF 00:07:57.031 )") 00:07:57.031 12:32:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:57.031 12:32:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:57.031 12:32:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:57.031 12:32:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:57.031 "params": { 00:07:57.031 "name": "Nvme0", 00:07:57.031 "trtype": "tcp", 00:07:57.031 "traddr": "10.0.0.2", 00:07:57.031 "adrfam": "ipv4", 00:07:57.031 "trsvcid": "4420", 00:07:57.031 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:57.031 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:57.031 "hdgst": false, 00:07:57.031 "ddgst": false 00:07:57.031 }, 00:07:57.031 "method": "bdev_nvme_attach_controller" 00:07:57.031 }' 00:07:57.288 [2024-07-12 12:32:23.132032] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:07:57.288 [2024-07-12 12:32:23.132108] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65337 ] 00:07:57.288 [2024-07-12 12:32:23.266844] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.288 [2024-07-12 12:32:23.353468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.546 [2024-07-12 12:32:23.414624] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:07:57.546 Running I/O for 1 seconds... 
00:07:58.918 00:07:58.918 Latency(us) 00:07:58.918 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:58.918 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:58.918 Verification LBA range: start 0x0 length 0x400 00:07:58.918 Nvme0n1 : 1.02 1623.91 101.49 0.00 0.00 38656.30 4021.53 36223.53 00:07:58.918 =================================================================================================================== 00:07:58.918 Total : 1623.91 101.49 0.00 0.00 38656.30 4021.53 36223.53 00:07:58.918 12:32:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:58.918 12:32:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:58.918 12:32:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:07:58.918 12:32:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:58.918 12:32:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:58.918 12:32:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:58.918 12:32:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:07:58.918 12:32:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:58.918 12:32:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:07:58.918 12:32:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:58.918 12:32:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:58.918 rmmod nvme_tcp 00:07:58.918 rmmod nvme_fabrics 00:07:58.918 rmmod nvme_keyring 00:07:58.918 12:32:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:58.918 12:32:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:07:58.918 12:32:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:07:58.918 12:32:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 65245 ']' 00:07:58.918 12:32:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 65245 00:07:58.918 12:32:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 65245 ']' 00:07:58.918 12:32:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 65245 00:07:58.918 12:32:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:07:58.918 12:32:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:58.918 12:32:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65245 00:07:58.918 12:32:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:58.918 12:32:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:58.918 killing process with pid 65245 00:07:58.918 12:32:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65245' 00:07:58.918 12:32:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 65245 00:07:58.918 12:32:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 65245 00:07:59.175 [2024-07-12 12:32:25.158448] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd 
for core 1, errno: 2 00:07:59.175 12:32:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:59.175 12:32:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:59.175 12:32:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:59.175 12:32:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:59.175 12:32:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:59.175 12:32:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.175 12:32:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:59.175 12:32:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.175 12:32:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:59.175 12:32:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:59.175 ************************************ 00:07:59.175 END TEST nvmf_host_management 00:07:59.175 ************************************ 00:07:59.175 00:07:59.175 real 0m6.030s 00:07:59.175 user 0m23.089s 00:07:59.175 sys 0m1.561s 00:07:59.175 12:32:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:59.175 12:32:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:59.432 12:32:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:59.432 12:32:25 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:59.432 12:32:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:59.432 12:32:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.432 12:32:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:59.432 ************************************ 00:07:59.432 START TEST nvmf_lvol 00:07:59.432 ************************************ 00:07:59.432 12:32:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:59.432 * Looking for test storage... 
00:07:59.433 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=16360ad5-8c23-4d49-afe0-9a35c426fec5 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:59.433 12:32:25 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:59.433 Cannot find device "nvmf_tgt_br" 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:59.433 Cannot find device "nvmf_tgt_br2" 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:59.433 Cannot find device "nvmf_tgt_br" 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:59.433 Cannot find device "nvmf_tgt_br2" 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:59.433 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:59.691 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:59.691 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:59.691 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:59.691 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:07:59.691 00:07:59.691 --- 10.0.0.2 ping statistics --- 00:07:59.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.691 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:59.691 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:59.691 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:07:59.691 00:07:59.691 --- 10.0.0.3 ping statistics --- 00:07:59.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.691 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:59.691 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:59.691 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:07:59.691 00:07:59.691 --- 10.0.0.1 ping statistics --- 00:07:59.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:59.691 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=65546 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 65546 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 65546 ']' 00:07:59.691 12:32:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.692 12:32:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:59.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.692 12:32:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.692 12:32:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:59.692 12:32:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:59.949 [2024-07-12 12:32:25.781807] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:07:59.949 [2024-07-12 12:32:25.781915] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:59.949 [2024-07-12 12:32:25.923987] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:00.207 [2024-07-12 12:32:26.045808] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:00.207 [2024-07-12 12:32:26.045890] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:00.208 [2024-07-12 12:32:26.045918] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:00.208 [2024-07-12 12:32:26.045929] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:00.208 [2024-07-12 12:32:26.045939] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:00.208 [2024-07-12 12:32:26.046140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.208 [2024-07-12 12:32:26.046310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.208 [2024-07-12 12:32:26.046304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:00.208 [2024-07-12 12:32:26.103597] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:00.772 12:32:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:00.772 12:32:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:08:00.772 12:32:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:00.772 12:32:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:00.772 12:32:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:00.772 12:32:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:00.772 12:32:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:01.029 [2024-07-12 12:32:27.023929] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:01.029 12:32:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:01.287 12:32:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:01.287 12:32:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:01.546 12:32:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:01.546 12:32:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:01.804 12:32:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:02.061 12:32:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=9722bbee-cb02-4d1d-97aa-155387ddff1e 00:08:02.061 12:32:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9722bbee-cb02-4d1d-97aa-155387ddff1e lvol 20 00:08:02.318 12:32:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=dce5386d-3cb6-4a7a-899c-5676c1f07f7a 00:08:02.318 12:32:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:02.575 12:32:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 dce5386d-3cb6-4a7a-899c-5676c1f07f7a 00:08:02.843 12:32:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:03.132 [2024-07-12 12:32:28.986643] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:03.132 12:32:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:03.390 12:32:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=65622 00:08:03.390 12:32:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:03.390 12:32:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:04.324 12:32:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot dce5386d-3cb6-4a7a-899c-5676c1f07f7a MY_SNAPSHOT 00:08:04.581 12:32:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=5bba0676-dad5-4ec2-9fe9-719187234e92 00:08:04.581 12:32:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize dce5386d-3cb6-4a7a-899c-5676c1f07f7a 30 00:08:04.839 12:32:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 5bba0676-dad5-4ec2-9fe9-719187234e92 MY_CLONE 00:08:05.096 12:32:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=96304747-e713-40a1-b188-36962534bab1 00:08:05.096 12:32:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 96304747-e713-40a1-b188-36962534bab1 00:08:05.661 12:32:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 65622 00:08:13.844 Initializing NVMe Controllers 00:08:13.844 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:13.844 Controller IO queue size 128, less than required. 00:08:13.844 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:13.844 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:13.844 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:13.844 Initialization complete. Launching workers. 
00:08:13.844 ======================================================== 00:08:13.844 Latency(us) 00:08:13.844 Device Information : IOPS MiB/s Average min max 00:08:13.844 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10374.40 40.52 12340.36 2687.20 71684.44 00:08:13.844 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10303.20 40.25 12422.11 3495.38 70123.88 00:08:13.844 ======================================================== 00:08:13.844 Total : 20677.60 80.77 12381.09 2687.20 71684.44 00:08:13.844 00:08:13.844 12:32:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:14.102 12:32:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete dce5386d-3cb6-4a7a-899c-5676c1f07f7a 00:08:14.360 12:32:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9722bbee-cb02-4d1d-97aa-155387ddff1e 00:08:14.618 12:32:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:14.618 12:32:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:14.618 12:32:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:14.618 12:32:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:14.618 12:32:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:08:14.618 12:32:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:14.618 12:32:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:08:14.618 12:32:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:14.618 12:32:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:14.618 rmmod nvme_tcp 00:08:14.618 rmmod nvme_fabrics 00:08:14.618 rmmod nvme_keyring 00:08:14.618 12:32:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:14.618 12:32:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:08:14.618 12:32:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:08:14.618 12:32:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 65546 ']' 00:08:14.618 12:32:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 65546 00:08:14.618 12:32:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 65546 ']' 00:08:14.618 12:32:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 65546 00:08:14.618 12:32:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:08:14.618 12:32:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:14.618 12:32:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 65546 00:08:14.618 killing process with pid 65546 00:08:14.618 12:32:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:14.618 12:32:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:14.618 12:32:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65546' 00:08:14.618 12:32:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 65546 00:08:14.618 12:32:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 65546 00:08:14.876 12:32:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:14.876 12:32:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
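The lvol test traced above reduces to a fairly small RPC sequence; the following is only a condensed sketch of the commands already shown in the trace (with $rpc standing for /home/vagrant/spdk_repo/spdk/scripts/rpc.py), not an addition to it:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # back an lvstore with a raid0 built from two malloc bdevs
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                     # Malloc0
    $rpc bdev_malloc_create 64 512                     # Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)
    # expose the lvol over NVMe/TCP on the namespaced target address
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # while spdk_nvme_perf writes to the namespace, exercise the lvol operations
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc bdev_lvol_resize "$lvol" 30
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"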
00:08:14.876 12:32:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:14.876 12:32:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:14.876 12:32:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:14.876 12:32:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.876 12:32:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:14.876 12:32:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.135 12:32:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:15.135 ************************************ 00:08:15.135 END TEST nvmf_lvol 00:08:15.135 ************************************ 00:08:15.135 00:08:15.135 real 0m15.698s 00:08:15.135 user 1m5.000s 00:08:15.135 sys 0m4.298s 00:08:15.135 12:32:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:15.135 12:32:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:15.135 12:32:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:15.135 12:32:41 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:15.135 12:32:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:15.135 12:32:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.135 12:32:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:15.135 ************************************ 00:08:15.135 START TEST nvmf_lvs_grow 00:08:15.135 ************************************ 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:15.135 * Looking for test storage... 
00:08:15.135 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=16360ad5-8c23-4d49-afe0-9a35c426fec5 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:15.135 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:15.136 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:15.136 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:15.136 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:15.136 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:15.136 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:15.136 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:15.136 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:15.136 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:15.136 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:15.136 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:15.136 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:15.136 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:15.136 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:15.136 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:15.136 Cannot find device "nvmf_tgt_br" 00:08:15.136 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:08:15.136 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:15.136 Cannot find device "nvmf_tgt_br2" 00:08:15.136 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:08:15.136 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:15.136 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:15.136 Cannot find device "nvmf_tgt_br" 00:08:15.136 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:08:15.136 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:15.136 Cannot find device "nvmf_tgt_br2" 00:08:15.136 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:08:15.136 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:15.393 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:15.393 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:15.393 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:08:15.393 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:08:15.393 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:15.393 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:15.393 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:08:15.393 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:15.393 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:15.393 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:15.393 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:15.393 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:15.393 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:15.393 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:15.393 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:15.393 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:15.393 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:15.393 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:15.393 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:15.393 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:15.394 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:15.394 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:15.394 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:15.394 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:15.394 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:15.394 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:15.394 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:15.394 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:15.394 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:15.394 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:15.394 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:15.394 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:15.394 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:08:15.394 00:08:15.394 --- 10.0.0.2 ping statistics --- 00:08:15.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.394 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:08:15.394 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:15.394 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:15.394 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:08:15.394 00:08:15.394 --- 10.0.0.3 ping statistics --- 00:08:15.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.394 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:08:15.394 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:15.394 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:15.394 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:08:15.394 00:08:15.394 --- 10.0.0.1 ping statistics --- 00:08:15.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.394 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:08:15.394 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:15.651 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:08:15.651 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:15.651 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:15.651 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:15.651 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:15.651 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:15.651 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:15.651 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:15.651 12:32:41 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:15.651 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:15.651 12:32:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:15.651 12:32:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:15.651 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=65955 00:08:15.651 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:15.651 12:32:41 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 65955 00:08:15.651 12:32:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 65955 ']' 00:08:15.651 12:32:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.651 12:32:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:15.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.651 12:32:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
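The fixture that nvmf_veth_init rebuilds here is the same virtual network used for the previous test; condensed from the ip/iptables commands in the trace (each link is also brought up, including lo inside the namespace), it amounts to:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # the target then runs inside the namespace:
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1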
00:08:15.651 12:32:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:15.651 12:32:41 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:15.651 [2024-07-12 12:32:41.558530] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:08:15.651 [2024-07-12 12:32:41.558623] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.651 [2024-07-12 12:32:41.698981] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.908 [2024-07-12 12:32:41.829749] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:15.908 [2024-07-12 12:32:41.829824] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:15.908 [2024-07-12 12:32:41.829839] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:15.908 [2024-07-12 12:32:41.829850] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:15.908 [2024-07-12 12:32:41.829859] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:15.908 [2024-07-12 12:32:41.829892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.908 [2024-07-12 12:32:41.887541] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:16.841 12:32:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:16.841 12:32:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:08:16.841 12:32:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:16.841 12:32:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:16.841 12:32:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:16.841 12:32:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.841 12:32:42 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:17.100 [2024-07-12 12:32:42.917605] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:17.100 12:32:42 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:17.100 12:32:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:17.100 12:32:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:17.100 12:32:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:17.100 ************************************ 00:08:17.100 START TEST lvs_grow_clean 00:08:17.100 ************************************ 00:08:17.100 12:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:08:17.100 12:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:17.100 12:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:17.100 12:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:17.100 12:32:42 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:17.100 12:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:17.100 12:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:17.100 12:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:17.100 12:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:17.100 12:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:17.357 12:32:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:17.357 12:32:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:17.614 12:32:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=7df9371c-5307-4d7f-a037-2b7271c59b15 00:08:17.614 12:32:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7df9371c-5307-4d7f-a037-2b7271c59b15 00:08:17.614 12:32:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:17.905 12:32:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:17.905 12:32:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:17.905 12:32:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7df9371c-5307-4d7f-a037-2b7271c59b15 lvol 150 00:08:17.905 12:32:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=50c4ae1d-bf90-4849-9c11-d2f9e034ef8d 00:08:17.905 12:32:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:18.232 12:32:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:18.232 [2024-07-12 12:32:44.160201] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:18.232 [2024-07-12 12:32:44.160306] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:18.232 true 00:08:18.232 12:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7df9371c-5307-4d7f-a037-2b7271c59b15 00:08:18.232 12:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:18.489 12:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:18.489 12:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:18.746 12:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 50c4ae1d-bf90-4849-9c11-d2f9e034ef8d 00:08:19.003 12:32:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:19.003 [2024-07-12 12:32:45.036756] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:19.003 12:32:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:19.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:19.260 12:32:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66038 00:08:19.260 12:32:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:19.260 12:32:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:19.260 12:32:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66038 /var/tmp/bdevperf.sock 00:08:19.260 12:32:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 66038 ']' 00:08:19.260 12:32:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:19.260 12:32:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:19.260 12:32:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:19.260 12:32:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:19.260 12:32:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:19.517 [2024-07-12 12:32:45.374636] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
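Condensed from the commands traced above (the grow step itself runs a little later, once bdevperf is generating I/O), the lvs_grow flow under test is:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    truncate -s 200M "$aio_file"
    $rpc bdev_aio_create "$aio_file" aio_bdev 4096
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
              --md-pages-per-cluster-ratio 300 aio_bdev lvs)    # 49 data clusters
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)
    # grow the backing file, let the aio bdev pick up the new size, then grow the lvstore
    truncate -s 400M "$aio_file"
    $rpc bdev_aio_rescan aio_bdev
    $rpc bdev_lvol_grow_lvstore -u "$lvs"                       # 49 -> 99 data clusters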
00:08:19.517 [2024-07-12 12:32:45.375047] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66038 ] 00:08:19.517 [2024-07-12 12:32:45.505695] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.774 [2024-07-12 12:32:45.623512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.774 [2024-07-12 12:32:45.678604] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:20.336 12:32:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:20.336 12:32:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:08:20.336 12:32:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:20.593 Nvme0n1 00:08:20.593 12:32:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:20.850 [ 00:08:20.850 { 00:08:20.850 "name": "Nvme0n1", 00:08:20.850 "aliases": [ 00:08:20.850 "50c4ae1d-bf90-4849-9c11-d2f9e034ef8d" 00:08:20.850 ], 00:08:20.850 "product_name": "NVMe disk", 00:08:20.850 "block_size": 4096, 00:08:20.850 "num_blocks": 38912, 00:08:20.850 "uuid": "50c4ae1d-bf90-4849-9c11-d2f9e034ef8d", 00:08:20.850 "assigned_rate_limits": { 00:08:20.850 "rw_ios_per_sec": 0, 00:08:20.850 "rw_mbytes_per_sec": 0, 00:08:20.850 "r_mbytes_per_sec": 0, 00:08:20.850 "w_mbytes_per_sec": 0 00:08:20.850 }, 00:08:20.850 "claimed": false, 00:08:20.850 "zoned": false, 00:08:20.850 "supported_io_types": { 00:08:20.850 "read": true, 00:08:20.850 "write": true, 00:08:20.850 "unmap": true, 00:08:20.850 "flush": true, 00:08:20.850 "reset": true, 00:08:20.850 "nvme_admin": true, 00:08:20.850 "nvme_io": true, 00:08:20.850 "nvme_io_md": false, 00:08:20.850 "write_zeroes": true, 00:08:20.850 "zcopy": false, 00:08:20.850 "get_zone_info": false, 00:08:20.850 "zone_management": false, 00:08:20.850 "zone_append": false, 00:08:20.850 "compare": true, 00:08:20.850 "compare_and_write": true, 00:08:20.850 "abort": true, 00:08:20.850 "seek_hole": false, 00:08:20.850 "seek_data": false, 00:08:20.850 "copy": true, 00:08:20.850 "nvme_iov_md": false 00:08:20.850 }, 00:08:20.850 "memory_domains": [ 00:08:20.850 { 00:08:20.850 "dma_device_id": "system", 00:08:20.850 "dma_device_type": 1 00:08:20.850 } 00:08:20.850 ], 00:08:20.850 "driver_specific": { 00:08:20.850 "nvme": [ 00:08:20.850 { 00:08:20.850 "trid": { 00:08:20.850 "trtype": "TCP", 00:08:20.850 "adrfam": "IPv4", 00:08:20.850 "traddr": "10.0.0.2", 00:08:20.850 "trsvcid": "4420", 00:08:20.850 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:20.850 }, 00:08:20.850 "ctrlr_data": { 00:08:20.850 "cntlid": 1, 00:08:20.850 "vendor_id": "0x8086", 00:08:20.850 "model_number": "SPDK bdev Controller", 00:08:20.850 "serial_number": "SPDK0", 00:08:20.850 "firmware_revision": "24.09", 00:08:20.850 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:20.850 "oacs": { 00:08:20.850 "security": 0, 00:08:20.850 "format": 0, 00:08:20.850 "firmware": 0, 00:08:20.850 "ns_manage": 0 00:08:20.850 }, 00:08:20.850 "multi_ctrlr": true, 00:08:20.850 
"ana_reporting": false 00:08:20.850 }, 00:08:20.850 "vs": { 00:08:20.850 "nvme_version": "1.3" 00:08:20.850 }, 00:08:20.850 "ns_data": { 00:08:20.850 "id": 1, 00:08:20.850 "can_share": true 00:08:20.850 } 00:08:20.850 } 00:08:20.850 ], 00:08:20.850 "mp_policy": "active_passive" 00:08:20.850 } 00:08:20.850 } 00:08:20.850 ] 00:08:20.850 12:32:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66061 00:08:20.850 12:32:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:20.850 12:32:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:21.106 Running I/O for 10 seconds... 00:08:22.037 Latency(us) 00:08:22.037 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:22.037 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.037 Nvme0n1 : 1.00 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:08:22.037 =================================================================================================================== 00:08:22.037 Total : 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:08:22.037 00:08:22.968 12:32:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7df9371c-5307-4d7f-a037-2b7271c59b15 00:08:22.968 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.968 Nvme0n1 : 2.00 7239.00 28.28 0.00 0.00 0.00 0.00 0.00 00:08:22.968 =================================================================================================================== 00:08:22.968 Total : 7239.00 28.28 0.00 0.00 0.00 0.00 0.00 00:08:22.968 00:08:23.225 true 00:08:23.225 12:32:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7df9371c-5307-4d7f-a037-2b7271c59b15 00:08:23.225 12:32:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:23.482 12:32:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:23.482 12:32:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:23.482 12:32:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 66061 00:08:24.045 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.045 Nvme0n1 : 3.00 7323.67 28.61 0.00 0.00 0.00 0.00 0.00 00:08:24.045 =================================================================================================================== 00:08:24.045 Total : 7323.67 28.61 0.00 0.00 0.00 0.00 0.00 00:08:24.045 00:08:24.990 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.991 Nvme0n1 : 4.00 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:08:24.991 =================================================================================================================== 00:08:24.991 Total : 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:08:24.991 00:08:25.934 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.934 Nvme0n1 : 5.00 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:08:25.934 =================================================================================================================== 00:08:25.934 Total : 7366.00 28.77 0.00 0.00 0.00 
0.00 0.00 00:08:25.934 00:08:27.308 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.308 Nvme0n1 : 6.00 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:08:27.308 =================================================================================================================== 00:08:27.308 Total : 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:08:27.308 00:08:28.242 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.242 Nvme0n1 : 7.00 7384.14 28.84 0.00 0.00 0.00 0.00 0.00 00:08:28.242 =================================================================================================================== 00:08:28.242 Total : 7384.14 28.84 0.00 0.00 0.00 0.00 0.00 00:08:28.242 00:08:29.176 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.176 Nvme0n1 : 8.00 7381.88 28.84 0.00 0.00 0.00 0.00 0.00 00:08:29.176 =================================================================================================================== 00:08:29.176 Total : 7381.88 28.84 0.00 0.00 0.00 0.00 0.00 00:08:29.176 00:08:30.110 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.110 Nvme0n1 : 9.00 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:08:30.110 =================================================================================================================== 00:08:30.110 Total : 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:08:30.110 00:08:31.044 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.044 Nvme0n1 : 10.00 7340.60 28.67 0.00 0.00 0.00 0.00 0.00 00:08:31.044 =================================================================================================================== 00:08:31.044 Total : 7340.60 28.67 0.00 0.00 0.00 0.00 0.00 00:08:31.044 00:08:31.044 00:08:31.044 Latency(us) 00:08:31.044 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:31.044 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.044 Nvme0n1 : 10.00 7337.74 28.66 0.00 0.00 17435.11 14358.34 45994.36 00:08:31.044 =================================================================================================================== 00:08:31.044 Total : 7337.74 28.66 0.00 0.00 17435.11 14358.34 45994.36 00:08:31.044 0 00:08:31.044 12:32:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66038 00:08:31.044 12:32:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 66038 ']' 00:08:31.044 12:32:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 66038 00:08:31.044 12:32:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:08:31.044 12:32:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:31.044 12:32:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66038 00:08:31.044 12:32:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:31.044 12:32:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:31.045 12:32:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66038' 00:08:31.045 killing process with pid 66038 00:08:31.045 12:32:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 66038 
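The cluster counts checked by the test follow directly from the sizes above: with --cluster-sz 4194304 (4 MiB), the 200 MiB backing file holds 200 / 4 = 50 clusters, of which the lvstore reports 49 as data clusters (the remainder presumably taken by lvstore metadata); after the file is grown to 400 MiB and rescanned, 400 / 4 = 100 clusters leave 99 for data, which is exactly what the (( data_clusters == 99 )) assertion verifies.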
00:08:31.045 Received shutdown signal, test time was about 10.000000 seconds 00:08:31.045 00:08:31.045 Latency(us) 00:08:31.045 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:31.045 =================================================================================================================== 00:08:31.045 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:31.045 12:32:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 66038 00:08:31.303 12:32:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:31.560 12:32:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:31.818 12:32:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7df9371c-5307-4d7f-a037-2b7271c59b15 00:08:31.818 12:32:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:32.076 12:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:32.076 12:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:32.076 12:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:32.334 [2024-07-12 12:32:58.350060] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:32.334 12:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7df9371c-5307-4d7f-a037-2b7271c59b15 00:08:32.334 12:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:08:32.334 12:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7df9371c-5307-4d7f-a037-2b7271c59b15 00:08:32.334 12:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:32.334 12:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:32.334 12:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:32.334 12:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:32.334 12:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:32.334 12:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:32.334 12:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:32.334 12:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:32.334 12:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 7df9371c-5307-4d7f-a037-2b7271c59b15 00:08:32.591 request: 00:08:32.591 { 00:08:32.591 "uuid": "7df9371c-5307-4d7f-a037-2b7271c59b15", 00:08:32.591 "method": "bdev_lvol_get_lvstores", 00:08:32.591 "req_id": 1 00:08:32.591 } 00:08:32.591 Got JSON-RPC error response 00:08:32.591 response: 00:08:32.591 { 00:08:32.591 "code": -19, 00:08:32.591 "message": "No such device" 00:08:32.591 } 00:08:32.591 12:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:08:32.591 12:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:32.591 12:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:32.591 12:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:32.591 12:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:32.849 aio_bdev 00:08:32.849 12:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 50c4ae1d-bf90-4849-9c11-d2f9e034ef8d 00:08:32.849 12:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=50c4ae1d-bf90-4849-9c11-d2f9e034ef8d 00:08:32.849 12:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:32.849 12:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:08:32.849 12:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:32.849 12:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:32.849 12:32:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:33.106 12:32:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 50c4ae1d-bf90-4849-9c11-d2f9e034ef8d -t 2000 00:08:33.671 [ 00:08:33.671 { 00:08:33.671 "name": "50c4ae1d-bf90-4849-9c11-d2f9e034ef8d", 00:08:33.671 "aliases": [ 00:08:33.671 "lvs/lvol" 00:08:33.671 ], 00:08:33.671 "product_name": "Logical Volume", 00:08:33.671 "block_size": 4096, 00:08:33.671 "num_blocks": 38912, 00:08:33.671 "uuid": "50c4ae1d-bf90-4849-9c11-d2f9e034ef8d", 00:08:33.671 "assigned_rate_limits": { 00:08:33.671 "rw_ios_per_sec": 0, 00:08:33.671 "rw_mbytes_per_sec": 0, 00:08:33.671 "r_mbytes_per_sec": 0, 00:08:33.671 "w_mbytes_per_sec": 0 00:08:33.671 }, 00:08:33.672 "claimed": false, 00:08:33.672 "zoned": false, 00:08:33.672 "supported_io_types": { 00:08:33.672 "read": true, 00:08:33.672 "write": true, 00:08:33.672 "unmap": true, 00:08:33.672 "flush": false, 00:08:33.672 "reset": true, 00:08:33.672 "nvme_admin": false, 00:08:33.672 "nvme_io": false, 00:08:33.672 "nvme_io_md": false, 00:08:33.672 "write_zeroes": true, 00:08:33.672 "zcopy": false, 00:08:33.672 "get_zone_info": false, 00:08:33.672 "zone_management": false, 00:08:33.672 "zone_append": false, 00:08:33.672 "compare": false, 00:08:33.672 "compare_and_write": false, 00:08:33.672 "abort": false, 00:08:33.672 "seek_hole": true, 00:08:33.672 "seek_data": true, 00:08:33.672 "copy": false, 00:08:33.672 "nvme_iov_md": false 00:08:33.672 }, 00:08:33.672 "driver_specific": { 00:08:33.672 "lvol": { 
00:08:33.672 "lvol_store_uuid": "7df9371c-5307-4d7f-a037-2b7271c59b15", 00:08:33.672 "base_bdev": "aio_bdev", 00:08:33.672 "thin_provision": false, 00:08:33.672 "num_allocated_clusters": 38, 00:08:33.672 "snapshot": false, 00:08:33.672 "clone": false, 00:08:33.672 "esnap_clone": false 00:08:33.672 } 00:08:33.672 } 00:08:33.672 } 00:08:33.672 ] 00:08:33.672 12:32:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:08:33.672 12:32:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:33.672 12:32:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7df9371c-5307-4d7f-a037-2b7271c59b15 00:08:33.929 12:32:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:33.929 12:32:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:33.929 12:32:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7df9371c-5307-4d7f-a037-2b7271c59b15 00:08:33.929 12:32:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:33.929 12:32:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 50c4ae1d-bf90-4849-9c11-d2f9e034ef8d 00:08:34.495 12:33:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7df9371c-5307-4d7f-a037-2b7271c59b15 00:08:34.495 12:33:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:34.753 12:33:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:35.011 ************************************ 00:08:35.011 END TEST lvs_grow_clean 00:08:35.011 ************************************ 00:08:35.011 00:08:35.011 real 0m18.107s 00:08:35.011 user 0m17.009s 00:08:35.011 sys 0m2.508s 00:08:35.011 12:33:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:35.011 12:33:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:35.269 12:33:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:08:35.270 12:33:01 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:35.270 12:33:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:35.270 12:33:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:35.270 12:33:01 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:35.270 ************************************ 00:08:35.270 START TEST lvs_grow_dirty 00:08:35.270 ************************************ 00:08:35.270 12:33:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:08:35.270 12:33:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:35.270 12:33:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:35.270 12:33:01 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:35.270 12:33:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:35.270 12:33:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:35.270 12:33:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:35.270 12:33:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:35.270 12:33:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:35.270 12:33:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:35.528 12:33:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:35.528 12:33:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:35.786 12:33:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=04354689-e472-4331-85c2-7368311bee93 00:08:35.786 12:33:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04354689-e472-4331-85c2-7368311bee93 00:08:35.786 12:33:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:36.045 12:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:36.045 12:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:36.045 12:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 04354689-e472-4331-85c2-7368311bee93 lvol 150 00:08:36.303 12:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=7e791cb1-f144-4890-9dd3-bf05a3c6f5f3 00:08:36.303 12:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:36.303 12:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:36.560 [2024-07-12 12:33:02.455236] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:36.560 [2024-07-12 12:33:02.455379] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:36.560 true 00:08:36.560 12:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04354689-e472-4331-85c2-7368311bee93 00:08:36.560 12:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:36.818 12:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 
)) 00:08:36.818 12:33:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:37.076 12:33:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7e791cb1-f144-4890-9dd3-bf05a3c6f5f3 00:08:37.334 12:33:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:37.593 [2024-07-12 12:33:03.459912] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:37.593 12:33:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:37.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:37.851 12:33:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66311 00:08:37.851 12:33:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:37.851 12:33:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:37.851 12:33:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66311 /var/tmp/bdevperf.sock 00:08:37.851 12:33:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 66311 ']' 00:08:37.851 12:33:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:37.851 12:33:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:37.851 12:33:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:37.851 12:33:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:37.851 12:33:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:37.851 [2024-07-12 12:33:03.804209] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
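(For reference: the bdevperf instance starting above is launched with -z, so it only waits on its RPC socket; the workload is wired up and started by later RPC calls in this trace. A minimal sketch of that flow, reusing the same binaries, flags, address and NQN as this run; the test script's background-job handling and traps are omitted:)
  # start bdevperf in wait-for-RPC mode on its own socket
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  # attach the NVMe/TCP subsystem exported by the target; its namespace shows up as bdev Nvme0n1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  # kick off the configured 10-second randwrite run
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests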
00:08:37.851 [2024-07-12 12:33:03.804545] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66311 ] 00:08:38.110 [2024-07-12 12:33:03.939203] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.110 [2024-07-12 12:33:04.059073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.110 [2024-07-12 12:33:04.120070] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:39.086 12:33:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:39.086 12:33:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:08:39.086 12:33:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:39.086 Nvme0n1 00:08:39.347 12:33:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:39.347 [ 00:08:39.347 { 00:08:39.347 "name": "Nvme0n1", 00:08:39.347 "aliases": [ 00:08:39.347 "7e791cb1-f144-4890-9dd3-bf05a3c6f5f3" 00:08:39.347 ], 00:08:39.347 "product_name": "NVMe disk", 00:08:39.347 "block_size": 4096, 00:08:39.347 "num_blocks": 38912, 00:08:39.347 "uuid": "7e791cb1-f144-4890-9dd3-bf05a3c6f5f3", 00:08:39.347 "assigned_rate_limits": { 00:08:39.347 "rw_ios_per_sec": 0, 00:08:39.347 "rw_mbytes_per_sec": 0, 00:08:39.347 "r_mbytes_per_sec": 0, 00:08:39.347 "w_mbytes_per_sec": 0 00:08:39.347 }, 00:08:39.347 "claimed": false, 00:08:39.347 "zoned": false, 00:08:39.347 "supported_io_types": { 00:08:39.347 "read": true, 00:08:39.347 "write": true, 00:08:39.347 "unmap": true, 00:08:39.347 "flush": true, 00:08:39.347 "reset": true, 00:08:39.347 "nvme_admin": true, 00:08:39.347 "nvme_io": true, 00:08:39.347 "nvme_io_md": false, 00:08:39.347 "write_zeroes": true, 00:08:39.347 "zcopy": false, 00:08:39.347 "get_zone_info": false, 00:08:39.347 "zone_management": false, 00:08:39.347 "zone_append": false, 00:08:39.347 "compare": true, 00:08:39.347 "compare_and_write": true, 00:08:39.347 "abort": true, 00:08:39.347 "seek_hole": false, 00:08:39.347 "seek_data": false, 00:08:39.347 "copy": true, 00:08:39.347 "nvme_iov_md": false 00:08:39.347 }, 00:08:39.347 "memory_domains": [ 00:08:39.347 { 00:08:39.347 "dma_device_id": "system", 00:08:39.347 "dma_device_type": 1 00:08:39.347 } 00:08:39.347 ], 00:08:39.347 "driver_specific": { 00:08:39.347 "nvme": [ 00:08:39.347 { 00:08:39.347 "trid": { 00:08:39.347 "trtype": "TCP", 00:08:39.347 "adrfam": "IPv4", 00:08:39.347 "traddr": "10.0.0.2", 00:08:39.347 "trsvcid": "4420", 00:08:39.347 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:39.347 }, 00:08:39.347 "ctrlr_data": { 00:08:39.347 "cntlid": 1, 00:08:39.347 "vendor_id": "0x8086", 00:08:39.347 "model_number": "SPDK bdev Controller", 00:08:39.347 "serial_number": "SPDK0", 00:08:39.347 "firmware_revision": "24.09", 00:08:39.347 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:39.347 "oacs": { 00:08:39.347 "security": 0, 00:08:39.347 "format": 0, 00:08:39.347 "firmware": 0, 00:08:39.347 "ns_manage": 0 00:08:39.347 }, 00:08:39.347 "multi_ctrlr": true, 00:08:39.347 
"ana_reporting": false 00:08:39.347 }, 00:08:39.347 "vs": { 00:08:39.347 "nvme_version": "1.3" 00:08:39.347 }, 00:08:39.347 "ns_data": { 00:08:39.347 "id": 1, 00:08:39.347 "can_share": true 00:08:39.347 } 00:08:39.347 } 00:08:39.347 ], 00:08:39.347 "mp_policy": "active_passive" 00:08:39.347 } 00:08:39.347 } 00:08:39.347 ] 00:08:39.347 12:33:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:39.347 12:33:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66334 00:08:39.347 12:33:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:39.626 Running I/O for 10 seconds... 00:08:40.577 Latency(us) 00:08:40.577 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:40.577 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.577 Nvme0n1 : 1.00 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:08:40.577 =================================================================================================================== 00:08:40.577 Total : 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:08:40.577 00:08:41.509 12:33:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 04354689-e472-4331-85c2-7368311bee93 00:08:41.509 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.509 Nvme0n1 : 2.00 7429.50 29.02 0.00 0.00 0.00 0.00 0.00 00:08:41.509 =================================================================================================================== 00:08:41.509 Total : 7429.50 29.02 0.00 0.00 0.00 0.00 0.00 00:08:41.509 00:08:41.766 true 00:08:41.766 12:33:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04354689-e472-4331-85c2-7368311bee93 00:08:41.766 12:33:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:42.024 12:33:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:42.024 12:33:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:42.024 12:33:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 66334 00:08:42.589 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.589 Nvme0n1 : 3.00 7450.67 29.10 0.00 0.00 0.00 0.00 0.00 00:08:42.589 =================================================================================================================== 00:08:42.589 Total : 7450.67 29.10 0.00 0.00 0.00 0.00 0.00 00:08:42.589 00:08:43.523 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.523 Nvme0n1 : 4.00 7397.75 28.90 0.00 0.00 0.00 0.00 0.00 00:08:43.523 =================================================================================================================== 00:08:43.523 Total : 7397.75 28.90 0.00 0.00 0.00 0.00 0.00 00:08:43.523 00:08:44.513 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.513 Nvme0n1 : 5.00 7340.60 28.67 0.00 0.00 0.00 0.00 0.00 00:08:44.513 =================================================================================================================== 00:08:44.513 Total : 7340.60 28.67 0.00 0.00 0.00 
0.00 0.00 00:08:44.513 00:08:45.887 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.887 Nvme0n1 : 6.00 7323.67 28.61 0.00 0.00 0.00 0.00 0.00 00:08:45.887 =================================================================================================================== 00:08:45.887 Total : 7323.67 28.61 0.00 0.00 0.00 0.00 0.00 00:08:45.887 00:08:46.454 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:46.454 Nvme0n1 : 7.00 7262.86 28.37 0.00 0.00 0.00 0.00 0.00 00:08:46.454 =================================================================================================================== 00:08:46.454 Total : 7262.86 28.37 0.00 0.00 0.00 0.00 0.00 00:08:46.454 00:08:47.844 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.844 Nvme0n1 : 8.00 7148.75 27.92 0.00 0.00 0.00 0.00 0.00 00:08:47.844 =================================================================================================================== 00:08:47.844 Total : 7148.75 27.92 0.00 0.00 0.00 0.00 0.00 00:08:47.844 00:08:48.777 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:48.777 Nvme0n1 : 9.00 7144.67 27.91 0.00 0.00 0.00 0.00 0.00 00:08:48.777 =================================================================================================================== 00:08:48.777 Total : 7144.67 27.91 0.00 0.00 0.00 0.00 0.00 00:08:48.777 00:08:49.717 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:49.717 Nvme0n1 : 10.00 7128.70 27.85 0.00 0.00 0.00 0.00 0.00 00:08:49.717 =================================================================================================================== 00:08:49.717 Total : 7128.70 27.85 0.00 0.00 0.00 0.00 0.00 00:08:49.717 00:08:49.717 00:08:49.717 Latency(us) 00:08:49.717 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:49.717 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:49.717 Nvme0n1 : 10.01 7133.92 27.87 0.00 0.00 17937.53 14417.92 142987.64 00:08:49.717 =================================================================================================================== 00:08:49.717 Total : 7133.92 27.87 0.00 0.00 17937.53 14417.92 142987.64 00:08:49.717 0 00:08:49.717 12:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66311 00:08:49.717 12:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 66311 ']' 00:08:49.717 12:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 66311 00:08:49.717 12:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:08:49.717 12:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:49.717 12:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66311 00:08:49.717 killing process with pid 66311 00:08:49.717 Received shutdown signal, test time was about 10.000000 seconds 00:08:49.717 00:08:49.717 Latency(us) 00:08:49.717 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:49.717 =================================================================================================================== 00:08:49.717 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:49.717 12:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # 
process_name=reactor_1 00:08:49.717 12:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:49.717 12:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66311' 00:08:49.717 12:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 66311 00:08:49.717 12:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 66311 00:08:49.974 12:33:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:49.974 12:33:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:50.538 12:33:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:50.538 12:33:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04354689-e472-4331-85c2-7368311bee93 00:08:50.796 12:33:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:50.797 12:33:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:50.797 12:33:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 65955 00:08:50.797 12:33:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 65955 00:08:50.797 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 65955 Killed "${NVMF_APP[@]}" "$@" 00:08:50.797 12:33:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:50.797 12:33:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:50.797 12:33:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:50.797 12:33:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:50.797 12:33:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:50.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.797 12:33:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=66467 00:08:50.797 12:33:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:50.797 12:33:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 66467 00:08:50.797 12:33:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 66467 ']' 00:08:50.797 12:33:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.797 12:33:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:50.797 12:33:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:50.797 12:33:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:50.797 12:33:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:50.797 [2024-07-12 12:33:16.727493] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:08:50.797 [2024-07-12 12:33:16.727984] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.056 [2024-07-12 12:33:16.873578] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.056 [2024-07-12 12:33:16.984108] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:51.056 [2024-07-12 12:33:16.984389] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:51.056 [2024-07-12 12:33:16.984434] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:51.056 [2024-07-12 12:33:16.984445] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:51.056 [2024-07-12 12:33:16.984453] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:51.056 [2024-07-12 12:33:16.984481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.056 [2024-07-12 12:33:17.041626] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:51.990 12:33:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:51.990 12:33:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:08:51.990 12:33:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:51.990 12:33:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:51.990 12:33:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:51.990 12:33:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:51.990 12:33:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:51.990 [2024-07-12 12:33:18.003567] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:51.990 [2024-07-12 12:33:18.004907] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:51.990 [2024-07-12 12:33:18.005159] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:51.990 12:33:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:51.990 12:33:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 7e791cb1-f144-4890-9dd3-bf05a3c6f5f3 00:08:51.990 12:33:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=7e791cb1-f144-4890-9dd3-bf05a3c6f5f3 00:08:51.990 12:33:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:51.990 12:33:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 
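(The bdev_aio_create above points at the same backing file that was in use when the target was killed with kill -9, so the lvolstore is loaded in recovery mode instead of a clean load, hence the 'Performing recovery on blobstore' notices. A minimal sketch of this verification step, reusing the rpc.py calls, file path and UUIDs from this run; the lvstore had already been grown to 99 data clusters with bdev_lvol_grow_lvstore before the unclean shutdown:)
  # re-create the AIO bdev over the old file; the dirty lvolstore is recovered while it is examined
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  # wait until the recovered lvol bdev has been registered
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
  # the grown totals (99 data clusters, 61 free) must have survived the crash
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04354689-e472-4331-85c2-7368311bee93
  # and the lvol itself must still be intact
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7e791cb1-f144-4890-9dd3-bf05a3c6f5f3 -t 2000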
00:08:51.990 12:33:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:51.990 12:33:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:51.990 12:33:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:52.249 12:33:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7e791cb1-f144-4890-9dd3-bf05a3c6f5f3 -t 2000 00:08:52.508 [ 00:08:52.508 { 00:08:52.508 "name": "7e791cb1-f144-4890-9dd3-bf05a3c6f5f3", 00:08:52.508 "aliases": [ 00:08:52.508 "lvs/lvol" 00:08:52.508 ], 00:08:52.508 "product_name": "Logical Volume", 00:08:52.508 "block_size": 4096, 00:08:52.508 "num_blocks": 38912, 00:08:52.508 "uuid": "7e791cb1-f144-4890-9dd3-bf05a3c6f5f3", 00:08:52.508 "assigned_rate_limits": { 00:08:52.508 "rw_ios_per_sec": 0, 00:08:52.508 "rw_mbytes_per_sec": 0, 00:08:52.508 "r_mbytes_per_sec": 0, 00:08:52.508 "w_mbytes_per_sec": 0 00:08:52.508 }, 00:08:52.508 "claimed": false, 00:08:52.508 "zoned": false, 00:08:52.508 "supported_io_types": { 00:08:52.508 "read": true, 00:08:52.508 "write": true, 00:08:52.508 "unmap": true, 00:08:52.508 "flush": false, 00:08:52.508 "reset": true, 00:08:52.508 "nvme_admin": false, 00:08:52.508 "nvme_io": false, 00:08:52.508 "nvme_io_md": false, 00:08:52.508 "write_zeroes": true, 00:08:52.508 "zcopy": false, 00:08:52.508 "get_zone_info": false, 00:08:52.508 "zone_management": false, 00:08:52.508 "zone_append": false, 00:08:52.508 "compare": false, 00:08:52.508 "compare_and_write": false, 00:08:52.508 "abort": false, 00:08:52.508 "seek_hole": true, 00:08:52.508 "seek_data": true, 00:08:52.508 "copy": false, 00:08:52.508 "nvme_iov_md": false 00:08:52.508 }, 00:08:52.508 "driver_specific": { 00:08:52.508 "lvol": { 00:08:52.508 "lvol_store_uuid": "04354689-e472-4331-85c2-7368311bee93", 00:08:52.508 "base_bdev": "aio_bdev", 00:08:52.508 "thin_provision": false, 00:08:52.508 "num_allocated_clusters": 38, 00:08:52.508 "snapshot": false, 00:08:52.508 "clone": false, 00:08:52.508 "esnap_clone": false 00:08:52.508 } 00:08:52.508 } 00:08:52.508 } 00:08:52.508 ] 00:08:52.508 12:33:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:08:52.508 12:33:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04354689-e472-4331-85c2-7368311bee93 00:08:52.508 12:33:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:52.766 12:33:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:52.766 12:33:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04354689-e472-4331-85c2-7368311bee93 00:08:52.766 12:33:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:53.025 12:33:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:53.025 12:33:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:53.284 [2024-07-12 12:33:19.269419] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev 
aio_bdev being removed: closing lvstore lvs 00:08:53.284 12:33:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04354689-e472-4331-85c2-7368311bee93 00:08:53.284 12:33:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:08:53.284 12:33:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04354689-e472-4331-85c2-7368311bee93 00:08:53.284 12:33:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:53.284 12:33:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:53.284 12:33:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:53.284 12:33:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:53.284 12:33:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:53.284 12:33:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:53.284 12:33:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:53.284 12:33:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:53.284 12:33:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04354689-e472-4331-85c2-7368311bee93 00:08:53.542 request: 00:08:53.542 { 00:08:53.542 "uuid": "04354689-e472-4331-85c2-7368311bee93", 00:08:53.542 "method": "bdev_lvol_get_lvstores", 00:08:53.542 "req_id": 1 00:08:53.542 } 00:08:53.542 Got JSON-RPC error response 00:08:53.542 response: 00:08:53.542 { 00:08:53.542 "code": -19, 00:08:53.542 "message": "No such device" 00:08:53.542 } 00:08:53.542 12:33:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:08:53.542 12:33:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:53.542 12:33:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:53.542 12:33:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:53.542 12:33:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:53.801 aio_bdev 00:08:53.801 12:33:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7e791cb1-f144-4890-9dd3-bf05a3c6f5f3 00:08:53.801 12:33:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=7e791cb1-f144-4890-9dd3-bf05a3c6f5f3 00:08:53.801 12:33:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:53.801 12:33:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:08:53.801 12:33:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:53.801 12:33:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:53.801 12:33:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:54.059 12:33:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7e791cb1-f144-4890-9dd3-bf05a3c6f5f3 -t 2000 00:08:54.318 [ 00:08:54.318 { 00:08:54.318 "name": "7e791cb1-f144-4890-9dd3-bf05a3c6f5f3", 00:08:54.318 "aliases": [ 00:08:54.318 "lvs/lvol" 00:08:54.318 ], 00:08:54.318 "product_name": "Logical Volume", 00:08:54.318 "block_size": 4096, 00:08:54.318 "num_blocks": 38912, 00:08:54.318 "uuid": "7e791cb1-f144-4890-9dd3-bf05a3c6f5f3", 00:08:54.318 "assigned_rate_limits": { 00:08:54.318 "rw_ios_per_sec": 0, 00:08:54.318 "rw_mbytes_per_sec": 0, 00:08:54.318 "r_mbytes_per_sec": 0, 00:08:54.318 "w_mbytes_per_sec": 0 00:08:54.318 }, 00:08:54.318 "claimed": false, 00:08:54.318 "zoned": false, 00:08:54.318 "supported_io_types": { 00:08:54.318 "read": true, 00:08:54.318 "write": true, 00:08:54.318 "unmap": true, 00:08:54.318 "flush": false, 00:08:54.318 "reset": true, 00:08:54.318 "nvme_admin": false, 00:08:54.318 "nvme_io": false, 00:08:54.318 "nvme_io_md": false, 00:08:54.318 "write_zeroes": true, 00:08:54.318 "zcopy": false, 00:08:54.318 "get_zone_info": false, 00:08:54.318 "zone_management": false, 00:08:54.318 "zone_append": false, 00:08:54.318 "compare": false, 00:08:54.318 "compare_and_write": false, 00:08:54.318 "abort": false, 00:08:54.318 "seek_hole": true, 00:08:54.318 "seek_data": true, 00:08:54.318 "copy": false, 00:08:54.318 "nvme_iov_md": false 00:08:54.318 }, 00:08:54.318 "driver_specific": { 00:08:54.318 "lvol": { 00:08:54.318 "lvol_store_uuid": "04354689-e472-4331-85c2-7368311bee93", 00:08:54.318 "base_bdev": "aio_bdev", 00:08:54.318 "thin_provision": false, 00:08:54.318 "num_allocated_clusters": 38, 00:08:54.318 "snapshot": false, 00:08:54.318 "clone": false, 00:08:54.318 "esnap_clone": false 00:08:54.318 } 00:08:54.318 } 00:08:54.318 } 00:08:54.318 ] 00:08:54.318 12:33:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:08:54.318 12:33:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04354689-e472-4331-85c2-7368311bee93 00:08:54.318 12:33:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:54.576 12:33:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:54.576 12:33:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 04354689-e472-4331-85c2-7368311bee93 00:08:54.576 12:33:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:54.835 12:33:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:54.835 12:33:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 7e791cb1-f144-4890-9dd3-bf05a3c6f5f3 00:08:55.094 12:33:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 04354689-e472-4331-85c2-7368311bee93 00:08:55.352 12:33:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:55.610 12:33:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:55.869 ************************************ 00:08:55.869 END TEST lvs_grow_dirty 00:08:55.869 ************************************ 00:08:55.869 00:08:55.869 real 0m20.795s 00:08:55.869 user 0m43.336s 00:08:55.869 sys 0m8.606s 00:08:55.869 12:33:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:55.869 12:33:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:56.127 12:33:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:08:56.127 12:33:21 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:56.127 12:33:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:08:56.127 12:33:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:08:56.127 12:33:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:08:56.127 12:33:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:56.127 12:33:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:08:56.127 12:33:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:08:56.127 12:33:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:08:56.127 12:33:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:56.127 nvmf_trace.0 00:08:56.127 12:33:21 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:08:56.127 12:33:21 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:56.127 12:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:56.127 12:33:21 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:08:56.386 12:33:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:56.386 12:33:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:08:56.386 12:33:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:56.386 12:33:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:56.386 rmmod nvme_tcp 00:08:56.386 rmmod nvme_fabrics 00:08:56.386 rmmod nvme_keyring 00:08:56.386 12:33:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:56.386 12:33:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:08:56.386 12:33:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:08:56.386 12:33:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 66467 ']' 00:08:56.386 12:33:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 66467 00:08:56.386 12:33:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 66467 ']' 00:08:56.386 12:33:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 66467 00:08:56.386 12:33:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:08:56.386 12:33:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- 
# '[' Linux = Linux ']' 00:08:56.386 12:33:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66467 00:08:56.386 killing process with pid 66467 00:08:56.386 12:33:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:56.386 12:33:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:56.386 12:33:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66467' 00:08:56.386 12:33:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 66467 00:08:56.386 12:33:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 66467 00:08:56.645 12:33:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:56.645 12:33:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:56.645 12:33:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:56.645 12:33:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:56.645 12:33:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:56.645 12:33:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.645 12:33:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:56.645 12:33:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.645 12:33:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:08:56.645 ************************************ 00:08:56.645 END TEST nvmf_lvs_grow 00:08:56.645 ************************************ 00:08:56.645 00:08:56.645 real 0m41.546s 00:08:56.645 user 1m6.831s 00:08:56.645 sys 0m11.931s 00:08:56.645 12:33:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:56.645 12:33:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:56.645 12:33:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:56.645 12:33:22 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:56.645 12:33:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:56.645 12:33:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:56.645 12:33:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:56.645 ************************************ 00:08:56.645 START TEST nvmf_bdev_io_wait 00:08:56.645 ************************************ 00:08:56.645 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:56.645 * Looking for test storage... 
00:08:56.645 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:56.645 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:56.645 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:56.645 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:56.645 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:56.645 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:56.645 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:56.645 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:56.645 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:56.645 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:56.645 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:56.645 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:56.645 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=16360ad5-8c23-4d49-afe0-9a35c426fec5 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:56.971 Cannot find device "nvmf_tgt_br" 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:56.971 Cannot find device "nvmf_tgt_br2" 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:56.971 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:56.971 Cannot find device "nvmf_tgt_br" 00:08:56.972 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:08:56.972 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:56.972 Cannot find device "nvmf_tgt_br2" 00:08:56.972 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:08:56.972 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:56.972 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
00:08:56.972 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:56.972 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:56.972 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:08:56.972 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:56.972 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:56.972 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:08:56.972 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:56.972 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:56.972 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:56.972 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:56.972 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:56.972 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:56.972 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:56.972 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:56.972 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:56.972 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:56.972 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:56.972 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:56.972 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:56.972 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:56.972 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:56.972 12:33:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:56.972 12:33:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:56.972 12:33:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:56.972 12:33:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:56.972 12:33:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:56.972 12:33:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:57.229 12:33:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:57.229 12:33:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:57.229 12:33:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:57.229 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:08:57.229 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:08:57.229 00:08:57.229 --- 10.0.0.2 ping statistics --- 00:08:57.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.229 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:08:57.229 12:33:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:57.229 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:57.229 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:08:57.229 00:08:57.229 --- 10.0.0.3 ping statistics --- 00:08:57.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.229 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:08:57.229 12:33:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:57.229 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:57.229 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:08:57.229 00:08:57.229 --- 10.0.0.1 ping statistics --- 00:08:57.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.229 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:08:57.229 12:33:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:57.229 12:33:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:08:57.229 12:33:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:57.229 12:33:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:57.229 12:33:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:57.230 12:33:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:57.230 12:33:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:57.230 12:33:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:57.230 12:33:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:57.230 12:33:23 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:57.230 12:33:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:57.230 12:33:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:57.230 12:33:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.230 12:33:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=66788 00:08:57.230 12:33:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:57.230 12:33:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 66788 00:08:57.230 12:33:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 66788 ']' 00:08:57.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.230 12:33:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.230 12:33:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:57.230 12:33:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
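(With the ping checks above passing, the NET_TYPE=virt topology built by nvmf_veth_init is in place: 10.0.0.1 stays on nvmf_init_if in the root namespace, the target addresses 10.0.0.2 and 10.0.0.3 live on veth peers inside nvmf_tgt_ns_spdk, and everything is joined through the nvmf_br bridge. A compressed sketch of that bring-up with the same names and addresses as the trace; the second target interface, the 'ip link set ... up' calls and the FORWARD rule are elided:)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br              # initiator end stays in the root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br                # target end is pushed into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge                                        # bridge the two peer ends together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT      # let NVMe/TCP traffic in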
00:08:57.230 12:33:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:57.230 12:33:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.230 [2024-07-12 12:33:23.162839] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:08:57.230 [2024-07-12 12:33:23.162950] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.487 [2024-07-12 12:33:23.304880] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:57.487 [2024-07-12 12:33:23.435828] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:57.487 [2024-07-12 12:33:23.436163] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:57.487 [2024-07-12 12:33:23.436343] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:57.487 [2024-07-12 12:33:23.436509] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:57.487 [2024-07-12 12:33:23.436561] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:57.487 [2024-07-12 12:33:23.436778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.487 [2024-07-12 12:33:23.436910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:57.487 [2024-07-12 12:33:23.436984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:57.487 [2024-07-12 12:33:23.436985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:58.422 [2024-07-12 12:33:24.282578] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:58.422 
12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:58.422 [2024-07-12 12:33:24.299057] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:58.422 Malloc0 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:58.422 [2024-07-12 12:33:24.363519] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=66823 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=66825 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:58.422 { 00:08:58.422 "params": { 00:08:58.422 "name": "Nvme$subsystem", 00:08:58.422 "trtype": "$TEST_TRANSPORT", 00:08:58.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:58.422 "adrfam": "ipv4", 00:08:58.422 "trsvcid": "$NVMF_PORT", 00:08:58.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:58.422 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:08:58.422 "hdgst": ${hdgst:-false}, 00:08:58.422 "ddgst": ${ddgst:-false} 00:08:58.422 }, 00:08:58.422 "method": "bdev_nvme_attach_controller" 00:08:58.422 } 00:08:58.422 EOF 00:08:58.422 )") 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=66827 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=66829 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:58.422 { 00:08:58.422 "params": { 00:08:58.422 "name": "Nvme$subsystem", 00:08:58.422 "trtype": "$TEST_TRANSPORT", 00:08:58.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:58.422 "adrfam": "ipv4", 00:08:58.422 "trsvcid": "$NVMF_PORT", 00:08:58.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:58.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:58.422 "hdgst": ${hdgst:-false}, 00:08:58.422 "ddgst": ${ddgst:-false} 00:08:58.422 }, 00:08:58.422 "method": "bdev_nvme_attach_controller" 00:08:58.422 } 00:08:58.422 EOF 00:08:58.422 )") 00:08:58.422 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:58.423 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:58.423 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:58.423 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:58.423 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:58.423 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:58.423 { 00:08:58.423 "params": { 00:08:58.423 "name": "Nvme$subsystem", 00:08:58.423 "trtype": "$TEST_TRANSPORT", 00:08:58.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:58.423 "adrfam": "ipv4", 00:08:58.423 "trsvcid": "$NVMF_PORT", 00:08:58.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:58.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:58.423 "hdgst": ${hdgst:-false}, 00:08:58.423 "ddgst": ${ddgst:-false} 00:08:58.423 }, 00:08:58.423 "method": "bdev_nvme_attach_controller" 00:08:58.423 } 00:08:58.423 EOF 00:08:58.423 )") 00:08:58.423 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:08:58.423 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:58.423 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:58.423 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:58.423 "params": { 00:08:58.423 "name": "Nvme1", 00:08:58.423 "trtype": "tcp", 00:08:58.423 "traddr": "10.0.0.2", 00:08:58.423 "adrfam": "ipv4", 00:08:58.423 "trsvcid": "4420", 00:08:58.423 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:58.423 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:58.423 "hdgst": false, 00:08:58.423 "ddgst": false 00:08:58.423 }, 00:08:58.423 "method": "bdev_nvme_attach_controller" 00:08:58.423 }' 00:08:58.423 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:58.423 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:58.423 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:58.423 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:58.423 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:58.423 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:58.423 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:58.423 { 00:08:58.423 "params": { 00:08:58.423 "name": "Nvme$subsystem", 00:08:58.423 "trtype": "$TEST_TRANSPORT", 00:08:58.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:58.423 "adrfam": "ipv4", 00:08:58.423 "trsvcid": "$NVMF_PORT", 00:08:58.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:58.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:58.423 "hdgst": ${hdgst:-false}, 00:08:58.423 "ddgst": ${ddgst:-false} 00:08:58.423 }, 00:08:58.423 "method": "bdev_nvme_attach_controller" 00:08:58.423 } 00:08:58.423 EOF 00:08:58.423 )") 00:08:58.423 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:58.423 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:58.423 "params": { 00:08:58.423 "name": "Nvme1", 00:08:58.423 "trtype": "tcp", 00:08:58.423 "traddr": "10.0.0.2", 00:08:58.423 "adrfam": "ipv4", 00:08:58.423 "trsvcid": "4420", 00:08:58.423 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:58.423 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:58.423 "hdgst": false, 00:08:58.423 "ddgst": false 00:08:58.423 }, 00:08:58.423 "method": "bdev_nvme_attach_controller" 00:08:58.423 }' 00:08:58.423 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:58.423 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:58.423 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:58.423 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:58.423 "params": { 00:08:58.423 "name": "Nvme1", 00:08:58.423 "trtype": "tcp", 00:08:58.423 "traddr": "10.0.0.2", 00:08:58.423 "adrfam": "ipv4", 00:08:58.423 "trsvcid": "4420", 00:08:58.423 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:58.423 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:58.423 "hdgst": false, 00:08:58.423 "ddgst": false 00:08:58.423 }, 00:08:58.423 "method": "bdev_nvme_attach_controller" 00:08:58.423 }' 00:08:58.423 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:08:58.423 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:58.423 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:58.423 "params": { 00:08:58.423 "name": "Nvme1", 00:08:58.423 "trtype": "tcp", 00:08:58.423 "traddr": "10.0.0.2", 00:08:58.423 "adrfam": "ipv4", 00:08:58.423 "trsvcid": "4420", 00:08:58.423 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:58.423 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:58.423 "hdgst": false, 00:08:58.423 "ddgst": false 00:08:58.423 }, 00:08:58.423 "method": "bdev_nvme_attach_controller" 00:08:58.423 }' 00:08:58.423 12:33:24 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 66823 00:08:58.423 [2024-07-12 12:33:24.429284] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:08:58.423 [2024-07-12 12:33:24.429538] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:58.423 [2024-07-12 12:33:24.445156] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:08:58.423 [2024-07-12 12:33:24.445568] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:58.423 [2024-07-12 12:33:24.453376] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:08:58.423 [2024-07-12 12:33:24.453714] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:58.423 [2024-07-12 12:33:24.466081] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:08:58.423 [2024-07-12 12:33:24.466984] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:58.681 [2024-07-12 12:33:24.641171] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.681 [2024-07-12 12:33:24.717425] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.681 [2024-07-12 12:33:24.744027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:08:58.939 [2024-07-12 12:33:24.797988] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.939 [2024-07-12 12:33:24.819775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:58.939 [2024-07-12 12:33:24.830249] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:58.939 [2024-07-12 12:33:24.887053] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:58.939 [2024-07-12 12:33:24.893818] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.939 [2024-07-12 12:33:24.901060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:08:58.939 Running I/O for 1 seconds... 
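bdev_io_wait runs four bdevperf instances in parallel against the same Nvme1n1 bdev, one workload each (write, read, flush, unmap), pinned to separate cores by the mask and kept apart in shared memory by -i. A sketch of that launch-and-wait pattern (flags as traced above; CFG is the hypothetical JSON file from the previous note, standing in for --json /dev/fd/63; the PIDs are whatever the shell assigns rather than the 66823/66825/66827/66829 seen in this run):

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    CFG=/tmp/nvme1.json

    $BDEVPERF -m 0x10 -i 1 --json "$CFG" -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
    $BDEVPERF -m 0x20 -i 2 --json "$CFG" -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
    $BDEVPERF -m 0x40 -i 3 --json "$CFG" -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
    $BDEVPERF -m 0x80 -i 4 --json "$CFG" -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!

    # The test waits on each job in turn (bdev_io_wait.sh@37-@40) before tearing down
    wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"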
00:08:58.939 [2024-07-12 12:33:24.966682] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:58.939 Running I/O for 1 seconds... 00:08:59.198 [2024-07-12 12:33:25.021087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:08:59.198 Running I/O for 1 seconds... 00:08:59.198 [2024-07-12 12:33:25.085057] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:08:59.198 Running I/O for 1 seconds... 00:09:00.132 00:09:00.132 Latency(us) 00:09:00.132 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:00.132 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:00.132 Nvme1n1 : 1.00 167741.24 655.24 0.00 0.00 760.28 340.71 1280.93 00:09:00.132 =================================================================================================================== 00:09:00.132 Total : 167741.24 655.24 0.00 0.00 760.28 340.71 1280.93 00:09:00.132 00:09:00.132 Latency(us) 00:09:00.132 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:00.132 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:00.132 Nvme1n1 : 1.01 10100.83 39.46 0.00 0.00 12615.11 7328.12 21090.68 00:09:00.132 =================================================================================================================== 00:09:00.132 Total : 10100.83 39.46 0.00 0.00 12615.11 7328.12 21090.68 00:09:00.132 00:09:00.132 Latency(us) 00:09:00.132 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:00.132 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:00.132 Nvme1n1 : 1.01 8293.03 32.39 0.00 0.00 15367.65 7804.74 24546.21 00:09:00.132 =================================================================================================================== 00:09:00.132 Total : 8293.03 32.39 0.00 0.00 15367.65 7804.74 24546.21 00:09:00.132 00:09:00.132 Latency(us) 00:09:00.132 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:00.132 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:00.132 Nvme1n1 : 1.01 7978.76 31.17 0.00 0.00 15955.12 9889.98 26810.18 00:09:00.132 =================================================================================================================== 00:09:00.132 Total : 7978.76 31.17 0.00 0.00 15955.12 9889.98 26810.18 00:09:00.390 12:33:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 66825 00:09:00.648 12:33:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 66827 00:09:00.648 12:33:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 66829 00:09:00.648 12:33:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:00.648 12:33:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.648 12:33:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:00.648 12:33:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.648 12:33:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:00.648 12:33:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:00.648 12:33:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:00.648 12:33:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:09:00.648 12:33:26 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:00.648 12:33:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:09:00.648 12:33:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:00.648 12:33:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:00.648 rmmod nvme_tcp 00:09:00.648 rmmod nvme_fabrics 00:09:00.648 rmmod nvme_keyring 00:09:00.648 12:33:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:00.648 12:33:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:09:00.648 12:33:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:09:00.648 12:33:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 66788 ']' 00:09:00.648 12:33:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 66788 00:09:00.649 12:33:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 66788 ']' 00:09:00.649 12:33:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 66788 00:09:00.649 12:33:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:09:00.649 12:33:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:00.649 12:33:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66788 00:09:00.649 12:33:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:00.649 12:33:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:00.649 killing process with pid 66788 00:09:00.649 12:33:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66788' 00:09:00.649 12:33:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 66788 00:09:00.649 12:33:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 66788 00:09:00.906 12:33:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:00.906 12:33:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:00.906 12:33:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:00.906 12:33:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:00.906 12:33:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:00.906 12:33:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.906 12:33:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:00.906 12:33:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.906 12:33:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:00.906 00:09:00.906 real 0m4.300s 00:09:00.906 user 0m18.590s 00:09:00.906 sys 0m2.497s 00:09:00.906 12:33:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:00.906 12:33:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:00.906 ************************************ 00:09:00.907 END TEST nvmf_bdev_io_wait 00:09:00.907 ************************************ 00:09:00.907 12:33:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:00.907 12:33:26 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:00.907 12:33:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:00.907 12:33:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:00.907 12:33:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:00.907 ************************************ 00:09:00.907 START TEST nvmf_queue_depth 00:09:00.907 ************************************ 00:09:00.907 12:33:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:01.165 * Looking for test storage... 00:09:01.165 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=16360ad5-8c23-4d49-afe0-9a35c426fec5 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:01.165 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:01.166 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:01.166 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:01.166 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:01.166 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:01.166 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:01.166 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:01.166 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:01.166 Cannot find device "nvmf_tgt_br" 00:09:01.166 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:09:01.166 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:01.166 Cannot find device "nvmf_tgt_br2" 00:09:01.166 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:09:01.166 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:01.166 12:33:27 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:01.166 Cannot find device "nvmf_tgt_br" 00:09:01.166 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:09:01.166 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:01.166 Cannot find device "nvmf_tgt_br2" 00:09:01.166 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:09:01.166 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:01.166 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:01.166 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:01.166 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:01.166 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:09:01.166 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:01.166 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:01.166 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:09:01.166 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:01.166 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:01.166 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:01.166 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:01.166 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:01.166 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:01.424 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:01.424 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:01.424 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:01.424 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:01.424 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:01.424 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:01.424 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:01.424 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:01.424 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:01.424 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:01.424 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:01.424 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:01.424 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:01.424 
12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:01.424 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:01.424 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:01.424 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:01.424 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:01.424 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:01.424 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:09:01.424 00:09:01.424 --- 10.0.0.2 ping statistics --- 00:09:01.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.424 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:09:01.424 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:01.424 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:01.424 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:09:01.424 00:09:01.424 --- 10.0.0.3 ping statistics --- 00:09:01.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.424 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:09:01.424 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:01.424 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:01.424 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:09:01.424 00:09:01.424 --- 10.0.0.1 ping statistics --- 00:09:01.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.424 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:09:01.424 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:01.424 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:09:01.424 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:01.424 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:01.424 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:01.425 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:01.425 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:01.425 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:01.425 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:01.425 12:33:27 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:01.425 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:01.425 12:33:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:01.425 12:33:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:01.425 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=67068 00:09:01.425 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 67068 00:09:01.425 12:33:27 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:01.425 12:33:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 67068 ']' 00:09:01.425 12:33:27 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.425 12:33:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:01.425 12:33:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.425 12:33:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:01.425 12:33:27 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:01.425 [2024-07-12 12:33:27.455242] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:09:01.425 [2024-07-12 12:33:27.455991] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:01.683 [2024-07-12 12:33:27.592327] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.683 [2024-07-12 12:33:27.706241] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:01.683 [2024-07-12 12:33:27.706301] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:01.683 [2024-07-12 12:33:27.706313] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:01.683 [2024-07-12 12:33:27.706321] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:01.683 [2024-07-12 12:33:27.706329] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
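nvmfappstart here launches nvmf_tgt inside the target namespace with a single-core mask (-m 0x2), and waitforlisten blocks until the application is up and answering on its RPC socket. A rough standalone equivalent is sketched below; the poll loop is a simplification of the harness's waitforlisten, and it assumes rpc.py against the default /var/tmp/spdk.sock:

    NVMF_TGT=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    ip netns exec nvmf_tgt_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!

    # Poll until the target responds over JSON-RPC (simplified waitforlisten)
    for _ in $(seq 1 100); do
        "$RPC" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done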
00:09:01.683 [2024-07-12 12:33:27.706359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.943 [2024-07-12 12:33:27.764656] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:02.510 12:33:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:02.510 12:33:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:09:02.510 12:33:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:02.510 12:33:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:02.510 12:33:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:02.510 12:33:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:02.510 12:33:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:02.510 12:33:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.510 12:33:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:02.510 [2024-07-12 12:33:28.516376] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:02.510 12:33:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.510 12:33:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:02.510 12:33:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.510 12:33:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:02.510 Malloc0 00:09:02.510 12:33:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.510 12:33:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:02.510 12:33:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.510 12:33:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:02.510 12:33:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.510 12:33:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:02.510 12:33:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.510 12:33:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:02.510 12:33:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.510 12:33:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:02.510 12:33:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.510 12:33:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:02.768 [2024-07-12 12:33:28.584361] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:02.768 12:33:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.768 12:33:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=67100 00:09:02.768 12:33:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:02.768 12:33:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:02.768 12:33:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 67100 /var/tmp/bdevperf.sock 00:09:02.768 12:33:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 67100 ']' 00:09:02.768 12:33:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:02.768 12:33:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:02.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:02.768 12:33:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:02.768 12:33:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:02.768 12:33:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:02.768 [2024-07-12 12:33:28.648041] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:09:02.768 [2024-07-12 12:33:28.648151] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67100 ] 00:09:02.768 [2024-07-12 12:33:28.790623] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.026 [2024-07-12 12:33:28.915434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.026 [2024-07-12 12:33:28.988570] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:03.592 12:33:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:03.592 12:33:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:09:03.592 12:33:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:03.592 12:33:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.592 12:33:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.592 NVMe0n1 00:09:03.592 12:33:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.592 12:33:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:03.851 Running I/O for 10 seconds... 
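Unlike bdev_io_wait, the queue-depth test starts bdevperf idle (-z) on its own RPC socket, attaches the remote namespace at runtime, and only then kicks off the 10-second verify run at queue depth 1024 whose results follow in the log. The three-step pattern, written out directly (sockets, NQN and flags as traced above; the harness additionally runs waitforlisten on the bdevperf socket before step 2):

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    BP_SOCK=/var/tmp/bdevperf.sock

    # 1. Start bdevperf in wait-for-RPC mode (-z) with the desired I/O pattern
    "$BDEVPERF" -z -r "$BP_SOCK" -q 1024 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!

    # 2. Attach the NVMe-oF namespace as bdev NVMe0n1 over the bdevperf socket
    "$RPC" -s "$BP_SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

    # 3. Trigger the run; bdevperf prints the per-core latency table when done
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$BP_SOCK" perform_tests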
00:09:13.826 00:09:13.826 Latency(us) 00:09:13.826 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.826 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:13.826 Verification LBA range: start 0x0 length 0x4000 00:09:13.826 NVMe0n1 : 10.09 7723.12 30.17 0.00 0.00 131955.76 28597.53 103904.35 00:09:13.826 =================================================================================================================== 00:09:13.826 Total : 7723.12 30.17 0.00 0.00 131955.76 28597.53 103904.35 00:09:13.826 0 00:09:14.085 12:33:39 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 67100 00:09:14.085 12:33:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 67100 ']' 00:09:14.085 12:33:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 67100 00:09:14.085 12:33:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:09:14.085 12:33:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:14.085 12:33:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67100 00:09:14.085 12:33:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:14.085 killing process with pid 67100 00:09:14.085 Received shutdown signal, test time was about 10.000000 seconds 00:09:14.085 00:09:14.085 Latency(us) 00:09:14.085 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.085 =================================================================================================================== 00:09:14.085 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:14.085 12:33:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:14.085 12:33:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67100' 00:09:14.085 12:33:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 67100 00:09:14.085 12:33:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 67100 00:09:14.343 12:33:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:14.343 12:33:40 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:14.343 12:33:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:14.343 12:33:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:09:14.343 12:33:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:14.343 12:33:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:09:14.343 12:33:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:14.343 12:33:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:14.343 rmmod nvme_tcp 00:09:14.343 rmmod nvme_fabrics 00:09:14.343 rmmod nvme_keyring 00:09:14.343 12:33:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:14.343 12:33:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:09:14.343 12:33:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:09:14.343 12:33:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 67068 ']' 00:09:14.343 12:33:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 67068 00:09:14.343 12:33:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 67068 ']' 00:09:14.343 
12:33:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 67068 00:09:14.343 12:33:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:09:14.343 12:33:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:14.343 12:33:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67068 00:09:14.343 12:33:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:14.343 killing process with pid 67068 00:09:14.343 12:33:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:14.343 12:33:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67068' 00:09:14.343 12:33:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 67068 00:09:14.343 12:33:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 67068 00:09:14.609 12:33:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:14.609 12:33:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:14.609 12:33:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:14.609 12:33:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:14.609 12:33:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:14.609 12:33:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.609 12:33:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:14.609 12:33:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.609 12:33:40 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:14.609 00:09:14.609 real 0m13.669s 00:09:14.609 user 0m23.598s 00:09:14.609 sys 0m2.298s 00:09:14.609 12:33:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:14.609 ************************************ 00:09:14.609 END TEST nvmf_queue_depth 00:09:14.609 ************************************ 00:09:14.609 12:33:40 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:14.878 12:33:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:14.879 12:33:40 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:14.879 12:33:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:14.879 12:33:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:14.879 12:33:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:14.879 ************************************ 00:09:14.879 START TEST nvmf_target_multipath 00:09:14.879 ************************************ 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:14.879 * Looking for test storage... 
00:09:14.879 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=16360ad5-8c23-4d49-afe0-9a35c426fec5 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:14.879 12:33:40 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:14.879 Cannot find device "nvmf_tgt_br" 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:14.879 Cannot find device "nvmf_tgt_br2" 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:14.879 Cannot find device "nvmf_tgt_br" 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:09:14.879 
12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:14.879 Cannot find device "nvmf_tgt_br2" 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:14.879 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:14.879 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:14.879 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:15.158 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:15.158 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:15.158 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:15.158 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:15.158 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:15.158 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:15.158 12:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:15.158 12:33:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:15.158 12:33:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:15.158 12:33:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:15.158 12:33:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:15.158 12:33:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:15.158 12:33:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:15.158 12:33:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:15.158 12:33:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:09:15.158 12:33:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:15.158 12:33:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:15.158 12:33:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:15.158 12:33:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:15.158 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:15.158 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:09:15.158 00:09:15.158 --- 10.0.0.2 ping statistics --- 00:09:15.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.158 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:09:15.158 12:33:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:15.158 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:15.158 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:09:15.158 00:09:15.158 --- 10.0.0.3 ping statistics --- 00:09:15.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.158 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:09:15.158 12:33:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:15.158 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:15.158 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:09:15.158 00:09:15.158 --- 10.0.0.1 ping statistics --- 00:09:15.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.158 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:09:15.158 12:33:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:15.158 12:33:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:09:15.158 12:33:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:15.158 12:33:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:15.158 12:33:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:15.158 12:33:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:15.158 12:33:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:15.158 12:33:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:15.158 12:33:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:15.158 12:33:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:09:15.158 12:33:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:15.158 12:33:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:15.158 12:33:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:15.158 12:33:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:15.158 12:33:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:15.158 12:33:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=67422 00:09:15.158 12:33:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
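At this point nvmf_veth_init has finished wiring the virtual test network and verified it with single-packet pings. Read end to end, the xtrace above boils down to the short sequence below; this is a condensed restatement of the commands already visible in the trace (interface names and the 10.0.0.x addresses are the suite's defaults), not additional configuration.

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side, stays on the host
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # first target port
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target port
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  for link in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$link" up; done
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                        # bridge all host-side peers together
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                       # host -> namespace reachability
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1              # namespace -> host reachability

With the data path confirmed, the target binary is launched inside the namespace (the nvmf_tgt command at the end of the trace above), so that 10.0.0.2 and 10.0.0.3 become two independent routes to the same NVMe-oF target.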
00:09:15.158 12:33:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 67422 00:09:15.158 12:33:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@829 -- # '[' -z 67422 ']' 00:09:15.158 12:33:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.158 12:33:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:15.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.158 12:33:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.158 12:33:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:15.158 12:33:41 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:15.158 [2024-07-12 12:33:41.186289] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:09:15.158 [2024-07-12 12:33:41.186398] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:15.417 [2024-07-12 12:33:41.326811] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:15.417 [2024-07-12 12:33:41.456900] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:15.417 [2024-07-12 12:33:41.456968] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:15.417 [2024-07-12 12:33:41.456982] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:15.417 [2024-07-12 12:33:41.456993] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:15.417 [2024-07-12 12:33:41.457003] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
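The waitforlisten/DPDK output that follows is the target coming up on /var/tmp/spdk.sock with its four reactors. Once it is listening, multipath.sh provisions it over JSON-RPC and then connects the initiator once per listener; the trace below shows each step with full paths, and the sketch here only gathers them in one place (rpc.py stands for the repo's scripts/rpc.py, and the NQN, serial number and addresses are the values the test uses):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB malloc bdev, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r   # -r: ANA reporting
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  # Two connects, one per listener, give the initiator two paths to the same namespace:
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
       -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
       -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G

After that the test toggles each listener's ANA state with nvmf_subsystem_listener_set_ana_state and checks /sys/block/nvme0c*n1/ana_state on the host while fio runs against /dev/nvme0n1, which is what the remainder of this test's trace records.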
00:09:15.417 [2024-07-12 12:33:41.457112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.417 [2024-07-12 12:33:41.457534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:15.417 [2024-07-12 12:33:41.458161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:15.417 [2024-07-12 12:33:41.458196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.675 [2024-07-12 12:33:41.515148] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:16.241 12:33:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:16.241 12:33:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@862 -- # return 0 00:09:16.241 12:33:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:16.241 12:33:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:16.241 12:33:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:16.241 12:33:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:16.241 12:33:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:16.499 [2024-07-12 12:33:42.459097] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:16.499 12:33:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:16.757 Malloc0 00:09:16.757 12:33:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:17.015 12:33:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:17.272 12:33:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:17.530 [2024-07-12 12:33:43.480016] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:17.531 12:33:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:17.789 [2024-07-12 12:33:43.704207] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:17.789 12:33:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid=16360ad5-8c23-4d49-afe0-9a35c426fec5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:09:17.789 12:33:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid=16360ad5-8c23-4d49-afe0-9a35c426fec5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:18.046 12:33:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:18.046 12:33:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # 
local i=0 00:09:18.046 12:33:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:18.046 12:33:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:18.046 12:33:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:09:19.962 12:33:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:19.962 12:33:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:19.962 12:33:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:19.962 12:33:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:19.962 12:33:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:19.962 12:33:45 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:09:19.962 12:33:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:19.962 12:33:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:19.962 12:33:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:09:19.962 12:33:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:19.962 12:33:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:19.962 12:33:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:19.962 12:33:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:09:19.962 12:33:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:19.962 12:33:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:19.962 12:33:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:19.962 12:33:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:19.962 12:33:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:19.962 12:33:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:19.962 12:33:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:19.962 12:33:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:19.962 12:33:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:19.962 12:33:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:19.962 12:33:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:19.962 12:33:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:19.962 12:33:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:19.962 12:33:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:19.962 12:33:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:19.962 12:33:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:19.962 12:33:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:19.962 12:33:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:19.962 12:33:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:09:19.962 12:33:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=67513 00:09:19.962 12:33:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:19.962 12:33:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:09:19.962 [global] 00:09:19.962 thread=1 00:09:19.962 invalidate=1 00:09:19.962 rw=randrw 00:09:19.962 time_based=1 00:09:19.962 runtime=6 00:09:19.962 ioengine=libaio 00:09:19.962 direct=1 00:09:19.962 bs=4096 00:09:19.962 iodepth=128 00:09:19.962 norandommap=0 00:09:19.962 numjobs=1 00:09:19.962 00:09:19.962 verify_dump=1 00:09:19.962 verify_backlog=512 00:09:19.962 verify_state_save=0 00:09:19.962 do_verify=1 00:09:19.962 verify=crc32c-intel 00:09:19.962 [job0] 00:09:19.962 filename=/dev/nvme0n1 00:09:20.220 Could not set queue depth (nvme0n1) 00:09:20.220 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:20.220 fio-3.35 00:09:20.220 Starting 1 thread 00:09:21.149 12:33:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:21.406 12:33:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:21.662 12:33:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:09:21.662 12:33:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:21.662 12:33:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:21.662 12:33:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:21.662 12:33:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:21.662 12:33:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:21.662 12:33:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:09:21.662 12:33:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:21.662 12:33:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:21.662 12:33:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:21.662 12:33:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:21.662 12:33:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:21.662 12:33:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:21.919 12:33:47 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:22.200 12:33:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:09:22.200 12:33:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:22.200 12:33:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:22.200 12:33:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:22.200 12:33:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:22.200 12:33:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:22.200 12:33:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:09:22.200 12:33:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:22.200 12:33:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:22.200 12:33:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:22.200 12:33:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:22.200 12:33:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:22.200 12:33:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 67513 00:09:26.377 00:09:26.377 job0: (groupid=0, jobs=1): err= 0: pid=67534: Fri Jul 12 12:33:52 2024 00:09:26.377 read: IOPS=10.3k, BW=40.3MiB/s (42.2MB/s)(242MiB/6007msec) 00:09:26.377 slat (usec): min=5, max=5277, avg=54.40, stdev=211.64 00:09:26.377 clat (usec): min=1430, max=15197, avg=8308.46, stdev=1436.54 00:09:26.377 lat (usec): min=1444, max=15209, avg=8362.86, stdev=1440.67 00:09:26.377 clat percentiles (usec): 00:09:26.377 | 1.00th=[ 4424], 5.00th=[ 6259], 10.00th=[ 7111], 20.00th=[ 7570], 00:09:26.377 | 30.00th=[ 7832], 40.00th=[ 8029], 50.00th=[ 8225], 60.00th=[ 8356], 00:09:26.377 | 70.00th=[ 8586], 80.00th=[ 8848], 90.00th=[ 9372], 95.00th=[11863], 00:09:26.377 | 99.00th=[13042], 99.50th=[13304], 99.90th=[14091], 99.95th=[14222], 00:09:26.377 | 99.99th=[15139] 00:09:26.377 bw ( KiB/s): min= 5984, max=28712, per=54.12%, avg=22306.55, stdev=6708.30, samples=11 00:09:26.377 iops : min= 1496, max= 7178, avg=5576.55, stdev=1677.03, samples=11 00:09:26.377 write: IOPS=6359, BW=24.8MiB/s (26.0MB/s)(134MiB/5378msec); 0 zone resets 00:09:26.377 slat (usec): min=13, max=2188, avg=66.14, stdev=143.12 00:09:26.377 clat (usec): min=1032, max=14398, avg=7241.35, stdev=1238.75 00:09:26.377 lat (usec): min=1085, max=14429, avg=7307.48, stdev=1243.25 00:09:26.377 clat percentiles (usec): 00:09:26.377 | 1.00th=[ 3458], 5.00th=[ 4424], 10.00th=[ 5800], 20.00th=[ 6783], 00:09:26.377 | 30.00th=[ 7046], 40.00th=[ 7242], 50.00th=[ 7439], 60.00th=[ 7570], 00:09:26.377 | 70.00th=[ 7767], 80.00th=[ 7963], 90.00th=[ 8225], 95.00th=[ 8586], 00:09:26.377 | 99.00th=[11207], 99.50th=[11600], 99.90th=[12780], 99.95th=[13173], 00:09:26.377 | 99.99th=[13698] 00:09:26.377 bw ( KiB/s): min= 6440, max=28152, per=87.70%, avg=22307.91, stdev=6370.23, samples=11 00:09:26.377 iops : min= 1610, max= 7038, avg=5576.91, stdev=1592.53, samples=11 00:09:26.377 lat (msec) : 2=0.03%, 4=1.27%, 10=93.29%, 20=5.42% 00:09:26.377 cpu : usr=6.09%, sys=24.94%, ctx=5631, majf=0, minf=116 00:09:26.377 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:26.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.377 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:26.377 issued rwts: total=61899,34200,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:26.377 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:26.377 00:09:26.377 Run status group 0 (all jobs): 00:09:26.377 READ: bw=40.3MiB/s (42.2MB/s), 40.3MiB/s-40.3MiB/s (42.2MB/s-42.2MB/s), io=242MiB (254MB), run=6007-6007msec 00:09:26.377 WRITE: bw=24.8MiB/s (26.0MB/s), 24.8MiB/s-24.8MiB/s (26.0MB/s-26.0MB/s), io=134MiB (140MB), run=5378-5378msec 00:09:26.377 00:09:26.377 Disk stats (read/write): 00:09:26.377 nvme0n1: ios=61200/33314, merge=0/0, ticks=486070/225257, in_queue=711327, util=98.65% 00:09:26.377 12:33:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:09:26.634 12:33:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 
4420 -n optimized 00:09:26.891 12:33:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:09:26.891 12:33:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:26.891 12:33:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:26.891 12:33:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:26.891 12:33:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:26.891 12:33:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:26.891 12:33:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:09:26.891 12:33:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:26.891 12:33:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:26.891 12:33:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:26.891 12:33:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:26.891 12:33:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:26.891 12:33:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:09:26.891 12:33:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=67613 00:09:26.891 12:33:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:26.891 12:33:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:09:26.891 [global] 00:09:26.891 thread=1 00:09:26.891 invalidate=1 00:09:26.891 rw=randrw 00:09:26.891 time_based=1 00:09:26.891 runtime=6 00:09:26.891 ioengine=libaio 00:09:26.891 direct=1 00:09:26.891 bs=4096 00:09:26.891 iodepth=128 00:09:26.891 norandommap=0 00:09:26.891 numjobs=1 00:09:26.891 00:09:26.891 verify_dump=1 00:09:26.891 verify_backlog=512 00:09:26.891 verify_state_save=0 00:09:26.891 do_verify=1 00:09:26.891 verify=crc32c-intel 00:09:26.891 [job0] 00:09:26.891 filename=/dev/nvme0n1 00:09:26.891 Could not set queue depth (nvme0n1) 00:09:26.891 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:26.891 fio-3.35 00:09:26.891 Starting 1 thread 00:09:27.819 12:33:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:28.077 12:33:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:28.334 12:33:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:09:28.334 12:33:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:28.334 12:33:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:28.334 12:33:54 nvmf_tcp.nvmf_target_multipath 
-- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:28.334 12:33:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:28.334 12:33:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:28.334 12:33:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:09:28.334 12:33:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:28.334 12:33:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:28.334 12:33:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:28.334 12:33:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:28.334 12:33:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:28.334 12:33:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:28.592 12:33:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:28.850 12:33:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:09:28.850 12:33:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:28.850 12:33:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:28.850 12:33:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:28.850 12:33:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:28.850 12:33:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:28.850 12:33:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:09:28.850 12:33:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:28.850 12:33:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:28.850 12:33:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:28.850 12:33:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:28.850 12:33:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:28.850 12:33:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 67613 00:09:34.108 00:09:34.108 job0: (groupid=0, jobs=1): err= 0: pid=67634: Fri Jul 12 12:33:59 2024 00:09:34.108 read: IOPS=11.3k, BW=44.2MiB/s (46.3MB/s)(265MiB/6007msec) 00:09:34.108 slat (usec): min=3, max=7793, avg=43.41, stdev=185.55 00:09:34.108 clat (usec): min=309, max=17884, avg=7725.11, stdev=2117.37 00:09:34.108 lat (usec): min=336, max=17957, avg=7768.53, stdev=2131.28 00:09:34.108 clat percentiles (usec): 00:09:34.108 | 1.00th=[ 1942], 5.00th=[ 4424], 10.00th=[ 5080], 20.00th=[ 5866], 00:09:34.108 | 30.00th=[ 6915], 40.00th=[ 7635], 50.00th=[ 7963], 60.00th=[ 8225], 00:09:34.108 | 70.00th=[ 8455], 80.00th=[ 8979], 90.00th=[10028], 95.00th=[11600], 00:09:34.108 | 99.00th=[13173], 99.50th=[14091], 99.90th=[16319], 99.95th=[16909], 00:09:34.108 | 99.99th=[17695] 00:09:34.108 bw ( KiB/s): min= 5784, max=41928, per=52.86%, avg=23907.64, stdev=9998.66, samples=11 00:09:34.108 iops : min= 1446, max=10482, avg=5976.91, stdev=2499.66, samples=11 00:09:34.108 write: IOPS=6760, BW=26.4MiB/s (27.7MB/s)(140MiB/5311msec); 0 zone resets 00:09:34.108 slat (usec): min=5, max=7345, avg=60.38, stdev=126.72 00:09:34.108 clat (usec): min=461, max=17128, avg=6619.53, stdev=1943.78 00:09:34.108 lat (usec): min=525, max=17216, avg=6679.91, stdev=1956.41 00:09:34.108 clat percentiles (usec): 00:09:34.108 | 1.00th=[ 2212], 5.00th=[ 3490], 10.00th=[ 3982], 20.00th=[ 4621], 00:09:34.108 | 30.00th=[ 5276], 40.00th=[ 6718], 50.00th=[ 7111], 60.00th=[ 7439], 00:09:34.108 | 70.00th=[ 7701], 80.00th=[ 8029], 90.00th=[ 8586], 95.00th=[ 9372], 00:09:34.108 | 99.00th=[11338], 99.50th=[12256], 99.90th=[14484], 99.95th=[15008], 00:09:34.108 | 99.99th=[16581] 00:09:34.108 bw ( KiB/s): min= 6088, max=40960, per=88.47%, avg=23924.36, stdev=9806.84, samples=11 00:09:34.108 iops : min= 1522, max=10240, avg=5981.09, stdev=2451.71, samples=11 00:09:34.108 lat (usec) : 500=0.03%, 750=0.04%, 1000=0.11% 00:09:34.108 lat (msec) : 2=0.83%, 4=4.75%, 10=86.55%, 20=7.70% 00:09:34.108 cpu : usr=6.61%, sys=29.89%, ctx=5941, majf=0, minf=133 00:09:34.108 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:34.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.108 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:34.108 issued rwts: total=67925,35905,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.108 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:34.108 00:09:34.108 Run status group 0 (all jobs): 00:09:34.108 READ: bw=44.2MiB/s (46.3MB/s), 44.2MiB/s-44.2MiB/s (46.3MB/s-46.3MB/s), io=265MiB (278MB), run=6007-6007msec 00:09:34.108 WRITE: bw=26.4MiB/s (27.7MB/s), 26.4MiB/s-26.4MiB/s (27.7MB/s-27.7MB/s), io=140MiB (147MB), run=5311-5311msec 00:09:34.108 00:09:34.108 Disk stats (read/write): 00:09:34.108 nvme0n1: ios=67023/35277, merge=0/0, ticks=480523/209004, in_queue=689527, util=98.63% 00:09:34.108 12:33:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:34.108 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:34.108 12:33:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:34.108 12:33:59 nvmf_tcp.nvmf_target_multipath -- 
common/autotest_common.sh@1219 -- # local i=0 00:09:34.108 12:33:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:34.108 12:33:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:34.108 12:33:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:34.108 12:33:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:34.108 12:33:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:09:34.108 12:33:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:34.108 12:33:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:34.108 12:33:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:34.108 12:33:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:34.108 12:33:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:09:34.108 12:33:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:34.108 12:33:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:34.108 12:33:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:34.108 12:33:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:34.108 12:33:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:34.108 12:33:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:34.108 rmmod nvme_tcp 00:09:34.108 rmmod nvme_fabrics 00:09:34.108 rmmod nvme_keyring 00:09:34.108 12:33:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:34.108 12:33:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:34.108 12:33:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:34.108 12:33:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 67422 ']' 00:09:34.108 12:33:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 67422 00:09:34.108 12:33:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@948 -- # '[' -z 67422 ']' 00:09:34.108 12:33:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # kill -0 67422 00:09:34.108 12:33:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # uname 00:09:34.108 12:33:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:34.108 12:33:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67422 00:09:34.108 12:33:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:34.108 12:33:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:34.108 killing process with pid 67422 00:09:34.108 12:33:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67422' 00:09:34.108 12:33:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@967 -- # kill 67422 00:09:34.109 12:33:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@972 -- # wait 67422 00:09:34.109 
12:33:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:34.109 12:33:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:34.109 12:33:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:34.109 12:33:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:34.109 12:33:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:34.109 12:33:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.109 12:33:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:34.109 12:33:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.109 12:33:59 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:34.109 00:09:34.109 real 0m19.200s 00:09:34.109 user 1m12.076s 00:09:34.109 sys 0m10.146s 00:09:34.109 12:33:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:34.109 12:33:59 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:34.109 ************************************ 00:09:34.109 END TEST nvmf_target_multipath 00:09:34.109 ************************************ 00:09:34.109 12:33:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:34.109 12:33:59 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:34.109 12:33:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:34.109 12:33:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:34.109 12:33:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:34.109 ************************************ 00:09:34.109 START TEST nvmf_zcopy 00:09:34.109 ************************************ 00:09:34.109 12:33:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:34.109 * Looking for test storage... 
00:09:34.109 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=16360ad5-8c23-4d49-afe0-9a35c426fec5 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:34.109 Cannot find device "nvmf_tgt_br" 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:34.109 Cannot find device "nvmf_tgt_br2" 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:34.109 Cannot find device "nvmf_tgt_br" 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:34.109 Cannot find device "nvmf_tgt_br2" 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:34.109 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:34.367 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:34.367 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:34.367 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:09:34.367 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:34.367 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:34.367 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:09:34.367 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:34.367 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:34.367 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:34.367 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:34.367 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:09:34.367 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:34.367 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:34.367 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:34.367 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:34.367 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:34.367 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:34.367 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:34.367 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:34.367 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:34.367 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:34.367 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:34.367 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:34.368 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:34.368 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:34.368 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:34.368 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:34.368 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:34.368 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:34.368 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:34.368 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:34.368 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:09:34.368 00:09:34.368 --- 10.0.0.2 ping statistics --- 00:09:34.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.368 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:09:34.368 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:34.368 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:34.368 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:09:34.368 00:09:34.368 --- 10.0.0.3 ping statistics --- 00:09:34.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.368 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:09:34.368 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:34.368 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:34.368 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:09:34.368 00:09:34.368 --- 10.0.0.1 ping statistics --- 00:09:34.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:34.368 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:09:34.368 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:34.368 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:09:34.368 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:34.368 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:34.368 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:34.368 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:34.368 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:34.368 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:34.368 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:34.626 12:34:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:34.626 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:34.626 12:34:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:34.626 12:34:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.626 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=67882 00:09:34.626 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:34.626 12:34:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 67882 00:09:34.626 12:34:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 67882 ']' 00:09:34.626 12:34:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.626 12:34:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:34.626 12:34:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.626 12:34:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:34.626 12:34:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:34.626 [2024-07-12 12:34:00.502629] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:09:34.626 [2024-07-12 12:34:00.502735] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:34.626 [2024-07-12 12:34:00.639551] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.884 [2024-07-12 12:34:00.792225] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:34.885 [2024-07-12 12:34:00.792307] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:34.885 [2024-07-12 12:34:00.792320] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:34.885 [2024-07-12 12:34:00.792329] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:34.885 [2024-07-12 12:34:00.792336] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:34.885 [2024-07-12 12:34:00.792367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.885 [2024-07-12 12:34:00.848011] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:35.818 12:34:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:35.818 12:34:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:09:35.818 12:34:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:35.818 12:34:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:35.818 12:34:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:35.818 12:34:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:35.818 12:34:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:35.818 12:34:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:35.818 12:34:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.818 12:34:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:35.818 [2024-07-12 12:34:01.589911] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:35.818 12:34:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.818 12:34:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:35.818 12:34:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.818 12:34:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:35.818 12:34:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.818 12:34:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:35.818 12:34:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.818 12:34:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:35.818 [2024-07-12 12:34:01.606088] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:35.818 12:34:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.818 12:34:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:35.818 12:34:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.818 12:34:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:35.818 12:34:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.818 12:34:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:35.818 12:34:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.818 12:34:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 
00:09:35.818 malloc0 00:09:35.818 12:34:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.818 12:34:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:35.818 12:34:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.818 12:34:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:35.818 12:34:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.818 12:34:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:35.818 12:34:01 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:35.818 12:34:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:35.818 12:34:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:35.818 12:34:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:35.818 12:34:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:35.818 { 00:09:35.818 "params": { 00:09:35.818 "name": "Nvme$subsystem", 00:09:35.818 "trtype": "$TEST_TRANSPORT", 00:09:35.818 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:35.818 "adrfam": "ipv4", 00:09:35.818 "trsvcid": "$NVMF_PORT", 00:09:35.818 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:35.818 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:35.818 "hdgst": ${hdgst:-false}, 00:09:35.818 "ddgst": ${ddgst:-false} 00:09:35.818 }, 00:09:35.818 "method": "bdev_nvme_attach_controller" 00:09:35.818 } 00:09:35.818 EOF 00:09:35.818 )") 00:09:35.818 12:34:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:35.818 12:34:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:09:35.818 12:34:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:35.818 12:34:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:35.818 "params": { 00:09:35.818 "name": "Nvme1", 00:09:35.818 "trtype": "tcp", 00:09:35.818 "traddr": "10.0.0.2", 00:09:35.818 "adrfam": "ipv4", 00:09:35.818 "trsvcid": "4420", 00:09:35.818 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:35.818 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:35.818 "hdgst": false, 00:09:35.818 "ddgst": false 00:09:35.818 }, 00:09:35.818 "method": "bdev_nvme_attach_controller" 00:09:35.818 }' 00:09:35.818 [2024-07-12 12:34:01.708956] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:09:35.818 [2024-07-12 12:34:01.709103] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67921 ] 00:09:35.818 [2024-07-12 12:34:01.861349] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.076 [2024-07-12 12:34:02.006300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.076 [2024-07-12 12:34:02.088027] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:36.333 Running I/O for 10 seconds... 
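Condensed, the setup traced above amounts to the sequence sketched below. This is a minimal reference sketch, not the test script itself: it assumes rpc_cmd resolves to SPDK's scripts/rpc.py (as in the common test helpers), uses a hypothetical nvme1.json file to stand in for the JSON that gen_nvmf_target_json feeds to bdevperf via /dev/fd/62, and takes interface names, addresses and sizes from the log. The second target interface (nvmf_tgt_if2 / 10.0.0.3) is configured the same way and omitted for brevity.

  # Build the veth/bridge test network: host-side initiator at 10.0.0.1,
  # target interface moved into a network namespace at 10.0.0.2.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

  # Start the target inside the namespace and configure it over RPC:
  # zero-copy TCP transport, one subsystem with a 32 MB malloc namespace,
  # and a listener on port 4420 (rpc_cmd in the log; scripts/rpc.py assumed here).
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

  # Initiator side: bdevperf attaches over NVMe/TCP using the bdev_nvme_attach_controller
  # parameters printed above (traddr 10.0.0.2, trsvcid 4420) and runs a 10 s verify
  # workload at queue depth 128 with 8 KiB I/O.
  ./build/examples/bdevperf --json nvme1.json -t 10 -q 128 -w verify -o 8192

In the latency summary that follows, the three headline numbers are mutually consistent: 5824.60 IOPS x 8 KiB is about 45.5 MiB/s, and queue_depth / IOPS = 128 / 5824.60 is roughly 22 ms, matching the ~21.9 ms average latency reported.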
00:09:46.328 00:09:46.328 Latency(us) 00:09:46.328 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:46.328 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:46.328 Verification LBA range: start 0x0 length 0x1000 00:09:46.328 Nvme1n1 : 10.01 5824.60 45.50 0.00 0.00 21900.61 1392.64 36461.85 00:09:46.328 =================================================================================================================== 00:09:46.328 Total : 5824.60 45.50 0.00 0.00 21900.61 1392.64 36461.85 00:09:46.586 12:34:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=68037 00:09:46.586 12:34:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:46.586 12:34:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:46.586 12:34:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:46.586 12:34:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:46.586 12:34:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:46.586 12:34:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:46.586 12:34:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:46.586 12:34:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:46.586 { 00:09:46.586 "params": { 00:09:46.586 "name": "Nvme$subsystem", 00:09:46.586 "trtype": "$TEST_TRANSPORT", 00:09:46.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:46.586 "adrfam": "ipv4", 00:09:46.586 "trsvcid": "$NVMF_PORT", 00:09:46.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:46.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:46.586 "hdgst": ${hdgst:-false}, 00:09:46.586 "ddgst": ${ddgst:-false} 00:09:46.586 }, 00:09:46.586 "method": "bdev_nvme_attach_controller" 00:09:46.586 } 00:09:46.586 EOF 00:09:46.586 )") 00:09:46.586 12:34:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:46.586 [2024-07-12 12:34:12.505578] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.586 [2024-07-12 12:34:12.505635] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.586 12:34:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:09:46.587 12:34:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:46.587 12:34:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:46.587 "params": { 00:09:46.587 "name": "Nvme1", 00:09:46.587 "trtype": "tcp", 00:09:46.587 "traddr": "10.0.0.2", 00:09:46.587 "adrfam": "ipv4", 00:09:46.587 "trsvcid": "4420", 00:09:46.587 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:46.587 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:46.587 "hdgst": false, 00:09:46.587 "ddgst": false 00:09:46.587 }, 00:09:46.587 "method": "bdev_nvme_attach_controller" 00:09:46.587 }' 00:09:46.587 [2024-07-12 12:34:12.517564] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.587 [2024-07-12 12:34:12.517611] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.587 [2024-07-12 12:34:12.529566] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.587 [2024-07-12 12:34:12.529618] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.587 [2024-07-12 12:34:12.541563] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.587 [2024-07-12 12:34:12.541613] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.587 [2024-07-12 12:34:12.544928] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:09:46.587 [2024-07-12 12:34:12.545010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68037 ] 00:09:46.587 [2024-07-12 12:34:12.553563] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.587 [2024-07-12 12:34:12.553866] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.587 [2024-07-12 12:34:12.565567] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.587 [2024-07-12 12:34:12.565798] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.587 [2024-07-12 12:34:12.577562] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.587 [2024-07-12 12:34:12.577775] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.587 [2024-07-12 12:34:12.589559] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.587 [2024-07-12 12:34:12.589755] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.587 [2024-07-12 12:34:12.601583] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.587 [2024-07-12 12:34:12.601859] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.587 [2024-07-12 12:34:12.613574] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.587 [2024-07-12 12:34:12.613794] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.587 [2024-07-12 12:34:12.625592] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.587 [2024-07-12 12:34:12.625853] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.587 [2024-07-12 12:34:12.637605] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:09:46.587 [2024-07-12 12:34:12.637916] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.587 [2024-07-12 12:34:12.649588] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.587 [2024-07-12 12:34:12.649817] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.846 [2024-07-12 12:34:12.661587] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.846 [2024-07-12 12:34:12.661790] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.846 [2024-07-12 12:34:12.673598] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.846 [2024-07-12 12:34:12.673846] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.846 [2024-07-12 12:34:12.679976] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.846 [2024-07-12 12:34:12.685615] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.846 [2024-07-12 12:34:12.685665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.846 [2024-07-12 12:34:12.697614] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.846 [2024-07-12 12:34:12.697662] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.846 [2024-07-12 12:34:12.709615] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.846 [2024-07-12 12:34:12.709664] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.846 [2024-07-12 12:34:12.721622] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.846 [2024-07-12 12:34:12.721676] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.846 [2024-07-12 12:34:12.733627] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.846 [2024-07-12 12:34:12.733679] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.846 [2024-07-12 12:34:12.745666] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.846 [2024-07-12 12:34:12.745723] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.846 [2024-07-12 12:34:12.757635] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.846 [2024-07-12 12:34:12.757691] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.846 [2024-07-12 12:34:12.769626] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.846 [2024-07-12 12:34:12.769672] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.846 [2024-07-12 12:34:12.781643] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.846 [2024-07-12 12:34:12.781699] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.846 [2024-07-12 12:34:12.793634] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.846 [2024-07-12 12:34:12.793679] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.846 [2024-07-12 12:34:12.800963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.846 [2024-07-12 12:34:12.805627] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.846 [2024-07-12 12:34:12.805661] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.846 [2024-07-12 12:34:12.817646] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.846 [2024-07-12 12:34:12.817692] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.846 [2024-07-12 12:34:12.829651] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.846 [2024-07-12 12:34:12.829697] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.846 [2024-07-12 12:34:12.841657] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.846 [2024-07-12 12:34:12.841709] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.846 [2024-07-12 12:34:12.853666] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.846 [2024-07-12 12:34:12.853723] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.846 [2024-07-12 12:34:12.865667] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.846 [2024-07-12 12:34:12.865722] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.846 [2024-07-12 12:34:12.877212] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:09:46.846 [2024-07-12 12:34:12.877660] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.846 [2024-07-12 12:34:12.877682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.846 [2024-07-12 12:34:12.889668] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.846 [2024-07-12 12:34:12.889720] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.846 [2024-07-12 12:34:12.901673] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.846 [2024-07-12 12:34:12.901724] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.846 [2024-07-12 12:34:12.913672] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.846 [2024-07-12 12:34:12.913718] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.104 [2024-07-12 12:34:12.925667] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.104 [2024-07-12 12:34:12.925715] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.104 [2024-07-12 12:34:12.937698] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.104 [2024-07-12 12:34:12.937758] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.104 [2024-07-12 12:34:12.949711] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.104 [2024-07-12 12:34:12.949772] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.104 [2024-07-12 12:34:12.961722] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.104 [2024-07-12 12:34:12.961783] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.104 [2024-07-12 12:34:12.973724] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.104 [2024-07-12 12:34:12.973787] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.104 [2024-07-12 12:34:12.985735] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.104 [2024-07-12 12:34:12.985791] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.104 [2024-07-12 12:34:12.997752] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.104 [2024-07-12 12:34:12.997816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.104 Running I/O for 5 seconds... 00:09:47.104 [2024-07-12 12:34:13.014597] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.104 [2024-07-12 12:34:13.014675] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.104 [2024-07-12 12:34:13.031833] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.104 [2024-07-12 12:34:13.031916] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.104 [2024-07-12 12:34:13.046600] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.104 [2024-07-12 12:34:13.046679] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.104 [2024-07-12 12:34:13.062719] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.104 [2024-07-12 12:34:13.062792] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.104 [2024-07-12 12:34:13.080931] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.104 [2024-07-12 12:34:13.081006] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.104 [2024-07-12 12:34:13.095781] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.104 [2024-07-12 12:34:13.095850] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.104 [2024-07-12 12:34:13.111722] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.104 [2024-07-12 12:34:13.111802] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.104 [2024-07-12 12:34:13.129098] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.104 [2024-07-12 12:34:13.129169] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.104 [2024-07-12 12:34:13.145759] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.104 [2024-07-12 12:34:13.145822] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.104 [2024-07-12 12:34:13.163242] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.104 [2024-07-12 12:34:13.163305] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.383 [2024-07-12 12:34:13.177822] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.383 [2024-07-12 12:34:13.177894] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.383 [2024-07-12 12:34:13.193534] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.383 
[2024-07-12 12:34:13.193600] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.383 [2024-07-12 12:34:13.211640] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.383 [2024-07-12 12:34:13.211708] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.383 [2024-07-12 12:34:13.226371] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.383 [2024-07-12 12:34:13.226446] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.383 [2024-07-12 12:34:13.241913] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.383 [2024-07-12 12:34:13.241980] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.383 [2024-07-12 12:34:13.260354] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.384 [2024-07-12 12:34:13.260447] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.384 [2024-07-12 12:34:13.275185] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.384 [2024-07-12 12:34:13.275263] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.384 [2024-07-12 12:34:13.290837] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.384 [2024-07-12 12:34:13.290917] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.384 [2024-07-12 12:34:13.308486] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.384 [2024-07-12 12:34:13.308559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.384 [2024-07-12 12:34:13.323188] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.384 [2024-07-12 12:34:13.323266] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.384 [2024-07-12 12:34:13.339154] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.384 [2024-07-12 12:34:13.339230] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.384 [2024-07-12 12:34:13.357656] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.384 [2024-07-12 12:34:13.357716] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.384 [2024-07-12 12:34:13.372762] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.384 [2024-07-12 12:34:13.372815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.384 [2024-07-12 12:34:13.382421] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.384 [2024-07-12 12:34:13.382468] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.384 [2024-07-12 12:34:13.398473] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.384 [2024-07-12 12:34:13.398531] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.384 [2024-07-12 12:34:13.414826] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.384 [2024-07-12 12:34:13.414885] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.384 [2024-07-12 12:34:13.431293] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.384 [2024-07-12 12:34:13.431354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.384 [2024-07-12 12:34:13.448668] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.384 [2024-07-12 12:34:13.448734] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.642 [2024-07-12 12:34:13.464958] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.642 [2024-07-12 12:34:13.465031] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.642 [2024-07-12 12:34:13.480826] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.642 [2024-07-12 12:34:13.480894] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.642 [2024-07-12 12:34:13.490197] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.642 [2024-07-12 12:34:13.490253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.642 [2024-07-12 12:34:13.506518] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.642 [2024-07-12 12:34:13.506589] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.642 [2024-07-12 12:34:13.523840] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.642 [2024-07-12 12:34:13.523912] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.642 [2024-07-12 12:34:13.539652] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.642 [2024-07-12 12:34:13.539720] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.642 [2024-07-12 12:34:13.557118] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.642 [2024-07-12 12:34:13.557194] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.642 [2024-07-12 12:34:13.571954] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.642 [2024-07-12 12:34:13.572030] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.642 [2024-07-12 12:34:13.587157] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.642 [2024-07-12 12:34:13.587224] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.642 [2024-07-12 12:34:13.602904] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.642 [2024-07-12 12:34:13.602980] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.642 [2024-07-12 12:34:13.620921] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.642 [2024-07-12 12:34:13.620999] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.642 [2024-07-12 12:34:13.636084] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.642 [2024-07-12 12:34:13.636151] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.642 [2024-07-12 12:34:13.653848] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.642 [2024-07-12 12:34:13.653925] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.642 [2024-07-12 12:34:13.668853] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.642 [2024-07-12 12:34:13.668923] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.642 [2024-07-12 12:34:13.678780] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.642 [2024-07-12 12:34:13.678843] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.642 [2024-07-12 12:34:13.693542] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.642 [2024-07-12 12:34:13.693605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.642 [2024-07-12 12:34:13.708676] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.642 [2024-07-12 12:34:13.708740] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.900 [2024-07-12 12:34:13.724488] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.900 [2024-07-12 12:34:13.724557] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.900 [2024-07-12 12:34:13.740362] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.900 [2024-07-12 12:34:13.740450] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.900 [2024-07-12 12:34:13.759601] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.900 [2024-07-12 12:34:13.759674] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.900 [2024-07-12 12:34:13.774232] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.900 [2024-07-12 12:34:13.774297] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.900 [2024-07-12 12:34:13.789672] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.900 [2024-07-12 12:34:13.789735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.900 [2024-07-12 12:34:13.799730] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.900 [2024-07-12 12:34:13.799795] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.900 [2024-07-12 12:34:13.815949] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.900 [2024-07-12 12:34:13.816020] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.900 [2024-07-12 12:34:13.832272] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.900 [2024-07-12 12:34:13.832337] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.900 [2024-07-12 12:34:13.850319] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.900 [2024-07-12 12:34:13.850370] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.900 [2024-07-12 12:34:13.866466] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.900 [2024-07-12 12:34:13.866522] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.900 [2024-07-12 12:34:13.883533] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.900 [2024-07-12 12:34:13.883603] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.900 [2024-07-12 12:34:13.899709] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.900 [2024-07-12 12:34:13.899777] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.900 [2024-07-12 12:34:13.916643] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.901 [2024-07-12 12:34:13.916708] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.901 [2024-07-12 12:34:13.933720] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.901 [2024-07-12 12:34:13.933783] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.901 [2024-07-12 12:34:13.950718] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.901 [2024-07-12 12:34:13.950778] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.901 [2024-07-12 12:34:13.967144] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.901 [2024-07-12 12:34:13.967194] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.160 [2024-07-12 12:34:13.985503] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.160 [2024-07-12 12:34:13.985552] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.160 [2024-07-12 12:34:14.000582] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.160 [2024-07-12 12:34:14.000630] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.160 [2024-07-12 12:34:14.019216] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.160 [2024-07-12 12:34:14.019262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.160 [2024-07-12 12:34:14.034860] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.160 [2024-07-12 12:34:14.034904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.160 [2024-07-12 12:34:14.052043] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.160 [2024-07-12 12:34:14.052085] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.160 [2024-07-12 12:34:14.069093] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.160 [2024-07-12 12:34:14.069148] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.160 [2024-07-12 12:34:14.085796] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.160 [2024-07-12 12:34:14.085844] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.160 [2024-07-12 12:34:14.102377] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.160 [2024-07-12 12:34:14.102451] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.160 [2024-07-12 12:34:14.119564] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.160 [2024-07-12 12:34:14.119619] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.160 [2024-07-12 12:34:14.135071] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.160 [2024-07-12 12:34:14.135130] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.160 [2024-07-12 12:34:14.152703] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.160 [2024-07-12 12:34:14.152755] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.160 [2024-07-12 12:34:14.167840] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.160 [2024-07-12 12:34:14.167888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.160 [2024-07-12 12:34:14.177875] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.160 [2024-07-12 12:34:14.177919] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.160 [2024-07-12 12:34:14.193850] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.160 [2024-07-12 12:34:14.193896] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.160 [2024-07-12 12:34:14.211940] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.160 [2024-07-12 12:34:14.211981] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.160 [2024-07-12 12:34:14.226622] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.160 [2024-07-12 12:34:14.226666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.419 [2024-07-12 12:34:14.241929] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.419 [2024-07-12 12:34:14.241978] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.419 [2024-07-12 12:34:14.258046] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.419 [2024-07-12 12:34:14.258103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.419 [2024-07-12 12:34:14.274381] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.419 [2024-07-12 12:34:14.274441] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.419 [2024-07-12 12:34:14.290835] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.419 [2024-07-12 12:34:14.290887] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.419 [2024-07-12 12:34:14.308253] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.419 [2024-07-12 12:34:14.308301] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.419 [2024-07-12 12:34:14.323156] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.419 [2024-07-12 12:34:14.323204] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.419 [2024-07-12 12:34:14.341044] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.419 [2024-07-12 12:34:14.341089] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.419 [2024-07-12 12:34:14.355575] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.419 [2024-07-12 12:34:14.355620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.419 [2024-07-12 12:34:14.371798] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.419 [2024-07-12 12:34:14.371847] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.419 [2024-07-12 12:34:14.388235] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.419 [2024-07-12 12:34:14.388278] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.419 [2024-07-12 12:34:14.405523] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.419 [2024-07-12 12:34:14.405562] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.419 [2024-07-12 12:34:14.421983] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.419 [2024-07-12 12:34:14.422021] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.419 [2024-07-12 12:34:14.438576] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.419 [2024-07-12 12:34:14.438616] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.419 [2024-07-12 12:34:14.454844] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.419 [2024-07-12 12:34:14.454884] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.419 [2024-07-12 12:34:14.472420] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.419 [2024-07-12 12:34:14.472491] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.419 [2024-07-12 12:34:14.483142] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.419 [2024-07-12 12:34:14.483180] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.678 [2024-07-12 12:34:14.494054] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.678 [2024-07-12 12:34:14.494093] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.678 [2024-07-12 12:34:14.506856] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.678 [2024-07-12 12:34:14.506896] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.678 [2024-07-12 12:34:14.516860] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.678 [2024-07-12 12:34:14.516900] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.678 [2024-07-12 12:34:14.528629] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.678 [2024-07-12 12:34:14.528666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.678 [2024-07-12 12:34:14.543932] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.678 [2024-07-12 12:34:14.543983] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.678 [2024-07-12 12:34:14.554746] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.678 [2024-07-12 12:34:14.554831] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.678 [2024-07-12 12:34:14.569700] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.678 [2024-07-12 12:34:14.569759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.678 [2024-07-12 12:34:14.586415] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.678 [2024-07-12 12:34:14.586481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.678 [2024-07-12 12:34:14.603120] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.678 [2024-07-12 12:34:14.603172] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.678 [2024-07-12 12:34:14.613279] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.678 [2024-07-12 12:34:14.613320] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.678 [2024-07-12 12:34:14.628831] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.678 [2024-07-12 12:34:14.628882] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.678 [2024-07-12 12:34:14.645792] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.678 [2024-07-12 12:34:14.645838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.678 [2024-07-12 12:34:14.663116] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.678 [2024-07-12 12:34:14.663169] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.678 [2024-07-12 12:34:14.679107] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.678 [2024-07-12 12:34:14.679149] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.678 [2024-07-12 12:34:14.697180] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.678 [2024-07-12 12:34:14.697221] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.678 [2024-07-12 12:34:14.710850] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.678 [2024-07-12 12:34:14.710918] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.678 [2024-07-12 12:34:14.726548] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.678 [2024-07-12 12:34:14.726587] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.678 [2024-07-12 12:34:14.744742] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.678 [2024-07-12 12:34:14.744780] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.937 [2024-07-12 12:34:14.759762] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.937 [2024-07-12 12:34:14.759812] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.937 [2024-07-12 12:34:14.769886] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.937 [2024-07-12 12:34:14.769926] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.937 [2024-07-12 12:34:14.786366] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.937 [2024-07-12 12:34:14.786422] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.937 [2024-07-12 12:34:14.801729] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.937 [2024-07-12 12:34:14.801788] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.937 [2024-07-12 12:34:14.818918] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.937 [2024-07-12 12:34:14.818966] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.937 [2024-07-12 12:34:14.834610] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.937 [2024-07-12 12:34:14.834650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.937 [2024-07-12 12:34:14.844051] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.937 [2024-07-12 12:34:14.844095] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.937 [2024-07-12 12:34:14.858611] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.938 [2024-07-12 12:34:14.858651] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.938 [2024-07-12 12:34:14.873598] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.938 [2024-07-12 12:34:14.873639] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.938 [2024-07-12 12:34:14.890115] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.938 [2024-07-12 12:34:14.890161] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.938 [2024-07-12 12:34:14.907003] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.938 [2024-07-12 12:34:14.907049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.938 [2024-07-12 12:34:14.921057] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.938 [2024-07-12 12:34:14.921099] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.938 [2024-07-12 12:34:14.936596] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.938 [2024-07-12 12:34:14.936640] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.938 [2024-07-12 12:34:14.954653] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.938 [2024-07-12 12:34:14.954692] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.938 [2024-07-12 12:34:14.970478] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.938 [2024-07-12 12:34:14.970517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.938 [2024-07-12 12:34:14.989130] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.938 [2024-07-12 12:34:14.989171] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.938 [2024-07-12 12:34:15.004324] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:48.938 [2024-07-12 12:34:15.004381] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.196 [2024-07-12 12:34:15.020616] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.196 [2024-07-12 12:34:15.020658] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.196 [2024-07-12 12:34:15.037229] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.196 [2024-07-12 12:34:15.037274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.196 [2024-07-12 12:34:15.053905] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.196 [2024-07-12 12:34:15.053947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.196 [2024-07-12 12:34:15.070628] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.196 [2024-07-12 12:34:15.070665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.196 [2024-07-12 12:34:15.088069] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.196 [2024-07-12 12:34:15.088107] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.196 [2024-07-12 12:34:15.103009] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.196 [2024-07-12 12:34:15.103064] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.197 [2024-07-12 12:34:15.118071] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.197 [2024-07-12 12:34:15.118109] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.197 [2024-07-12 12:34:15.134239] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.197 [2024-07-12 12:34:15.134279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.197 [2024-07-12 12:34:15.152176] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.197 [2024-07-12 12:34:15.152215] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.197 [2024-07-12 12:34:15.167155] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.197 [2024-07-12 12:34:15.167195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.197 [2024-07-12 12:34:15.176337] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.197 [2024-07-12 12:34:15.176430] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.197 [2024-07-12 12:34:15.192496] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.197 [2024-07-12 12:34:15.192561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.197 [2024-07-12 12:34:15.208994] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.197 [2024-07-12 12:34:15.209033] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.197 [2024-07-12 12:34:15.226352] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.197 [2024-07-12 12:34:15.226392] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.197 [2024-07-12 12:34:15.242066] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.197 [2024-07-12 12:34:15.242106] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.197 [2024-07-12 12:34:15.258302] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.197 [2024-07-12 12:34:15.258341] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.455 [2024-07-12 12:34:15.275673] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.455 [2024-07-12 12:34:15.275711] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.455 [2024-07-12 12:34:15.293067] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.455 [2024-07-12 12:34:15.293106] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.455 [2024-07-12 12:34:15.309247] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.455 [2024-07-12 12:34:15.309285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.455 [2024-07-12 12:34:15.326039] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.455 [2024-07-12 12:34:15.326079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.455 [2024-07-12 12:34:15.343617] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.455 [2024-07-12 12:34:15.343654] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.455 [2024-07-12 12:34:15.359370] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.455 [2024-07-12 12:34:15.359433] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.455 [2024-07-12 12:34:15.376615] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.455 [2024-07-12 12:34:15.376655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.455 [2024-07-12 12:34:15.393168] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.455 [2024-07-12 12:34:15.393209] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.455 [2024-07-12 12:34:15.409890] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.455 [2024-07-12 12:34:15.409943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.455 [2024-07-12 12:34:15.425961] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.455 [2024-07-12 12:34:15.425999] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.455 [2024-07-12 12:34:15.444381] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.455 [2024-07-12 12:34:15.444460] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.455 [2024-07-12 12:34:15.459132] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.455 [2024-07-12 12:34:15.459169] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.455 [2024-07-12 12:34:15.468390] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.455 [2024-07-12 12:34:15.468440] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.455 [2024-07-12 12:34:15.484861] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.455 [2024-07-12 12:34:15.484915] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.455 [2024-07-12 12:34:15.502163] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.455 [2024-07-12 12:34:15.502201] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.455 [2024-07-12 12:34:15.517651] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.455 [2024-07-12 12:34:15.517685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.455 [2024-07-12 12:34:15.527097] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.455 [2024-07-12 12:34:15.527138] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.714 [2024-07-12 12:34:15.542947] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.714 [2024-07-12 12:34:15.542984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.714 [2024-07-12 12:34:15.559826] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.714 [2024-07-12 12:34:15.559860] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.714 [2024-07-12 12:34:15.577369] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.714 [2024-07-12 12:34:15.577423] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.714 [2024-07-12 12:34:15.592170] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.714 [2024-07-12 12:34:15.592205] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.714 [2024-07-12 12:34:15.608550] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.714 [2024-07-12 12:34:15.608584] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.714 [2024-07-12 12:34:15.625317] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.714 [2024-07-12 12:34:15.625353] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.714 [2024-07-12 12:34:15.642702] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.714 [2024-07-12 12:34:15.642741] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.714 [2024-07-12 12:34:15.660364] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.714 [2024-07-12 12:34:15.660416] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.715 [2024-07-12 12:34:15.674832] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.715 [2024-07-12 12:34:15.674887] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.715 [2024-07-12 12:34:15.692427] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.715 [2024-07-12 12:34:15.692489] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.715 [2024-07-12 12:34:15.708257] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.715 [2024-07-12 12:34:15.708297] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.715 [2024-07-12 12:34:15.725980] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.715 [2024-07-12 12:34:15.726018] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.715 [2024-07-12 12:34:15.740717] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.715 [2024-07-12 12:34:15.740754] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.715 [2024-07-12 12:34:15.756682] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.715 [2024-07-12 12:34:15.756719] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.715 [2024-07-12 12:34:15.773712] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.715 [2024-07-12 12:34:15.773767] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.973 [2024-07-12 12:34:15.791510] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.973 [2024-07-12 12:34:15.791548] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.973 [2024-07-12 12:34:15.806478] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.973 [2024-07-12 12:34:15.806514] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.973 [2024-07-12 12:34:15.822900] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.973 [2024-07-12 12:34:15.822938] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.973 [2024-07-12 12:34:15.840296] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.973 [2024-07-12 12:34:15.840336] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.974 [2024-07-12 12:34:15.855825] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.974 [2024-07-12 12:34:15.855862] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.974 [2024-07-12 12:34:15.865511] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.974 [2024-07-12 12:34:15.865547] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.974 [2024-07-12 12:34:15.882292] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.974 [2024-07-12 12:34:15.882331] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.974 [2024-07-12 12:34:15.899358] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.974 [2024-07-12 12:34:15.899417] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.974 [2024-07-12 12:34:15.917867] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.974 [2024-07-12 12:34:15.917907] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.974 [2024-07-12 12:34:15.932843] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.974 [2024-07-12 12:34:15.932880] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.974 [2024-07-12 12:34:15.942498] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.974 [2024-07-12 12:34:15.942534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.974 [2024-07-12 12:34:15.958418] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.974 [2024-07-12 12:34:15.958497] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.974 [2024-07-12 12:34:15.975382] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.974 [2024-07-12 12:34:15.975439] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.974 [2024-07-12 12:34:15.992147] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.974 [2024-07-12 12:34:15.992193] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.974 [2024-07-12 12:34:16.009999] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.974 [2024-07-12 12:34:16.010073] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.974 [2024-07-12 12:34:16.024430] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.974 [2024-07-12 12:34:16.024468] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.974 [2024-07-12 12:34:16.041937] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.974 [2024-07-12 12:34:16.041991] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.233 [2024-07-12 12:34:16.057100] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.233 [2024-07-12 12:34:16.057178] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.233 [2024-07-12 12:34:16.067600] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.233 [2024-07-12 12:34:16.067652] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.233 [2024-07-12 12:34:16.082722] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.233 [2024-07-12 12:34:16.082776] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.233 [2024-07-12 12:34:16.100209] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.233 [2024-07-12 12:34:16.100258] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.233 [2024-07-12 12:34:16.116127] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.233 [2024-07-12 12:34:16.116172] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.233 [2024-07-12 12:34:16.134647] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.233 [2024-07-12 12:34:16.134714] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.233 [2024-07-12 12:34:16.149280] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.233 [2024-07-12 12:34:16.149326] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.233 [2024-07-12 12:34:16.166653] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.233 [2024-07-12 12:34:16.166698] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.233 [2024-07-12 12:34:16.181191] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.233 [2024-07-12 12:34:16.181243] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.233 [2024-07-12 12:34:16.197449] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.233 [2024-07-12 12:34:16.197516] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.233 [2024-07-12 12:34:16.215252] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.233 [2024-07-12 12:34:16.215313] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.233 [2024-07-12 12:34:16.230613] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.233 [2024-07-12 12:34:16.230659] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.233 [2024-07-12 12:34:16.249766] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.233 [2024-07-12 12:34:16.249821] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.233 [2024-07-12 12:34:16.263825] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.233 [2024-07-12 12:34:16.263871] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.233 [2024-07-12 12:34:16.279522] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.233 [2024-07-12 12:34:16.279565] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.233 [2024-07-12 12:34:16.297842] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.233 [2024-07-12 12:34:16.297882] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.491 [2024-07-12 12:34:16.311804] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.491 [2024-07-12 12:34:16.311857] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.491 [2024-07-12 12:34:16.327616] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.491 [2024-07-12 12:34:16.327655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.491 [2024-07-12 12:34:16.345754] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.491 [2024-07-12 12:34:16.345807] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.491 [2024-07-12 12:34:16.360899] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.491 [2024-07-12 12:34:16.360936] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.491 [2024-07-12 12:34:16.370843] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.491 [2024-07-12 12:34:16.370881] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.491 [2024-07-12 12:34:16.386549] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.491 [2024-07-12 12:34:16.386612] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.492 [2024-07-12 12:34:16.396627] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.492 [2024-07-12 12:34:16.396674] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.492 [2024-07-12 12:34:16.412006] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.492 [2024-07-12 12:34:16.412072] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.492 [2024-07-12 12:34:16.427984] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.492 [2024-07-12 12:34:16.428043] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.492 [2024-07-12 12:34:16.438119] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.492 [2024-07-12 12:34:16.438158] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.492 [2024-07-12 12:34:16.453836] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.492 [2024-07-12 12:34:16.453877] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.492 [2024-07-12 12:34:16.470164] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.492 [2024-07-12 12:34:16.470204] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.492 [2024-07-12 12:34:16.486062] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.492 [2024-07-12 12:34:16.486102] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.492 [2024-07-12 12:34:16.503802] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.492 [2024-07-12 12:34:16.503854] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.492 [2024-07-12 12:34:16.518082] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.492 [2024-07-12 12:34:16.518118] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.492 [2024-07-12 12:34:16.533190] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.492 [2024-07-12 12:34:16.533226] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.492 [2024-07-12 12:34:16.542856] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.492 [2024-07-12 12:34:16.542891] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.492 [2024-07-12 12:34:16.557953] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.492 [2024-07-12 12:34:16.557988] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.754 [2024-07-12 12:34:16.568308] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.754 [2024-07-12 12:34:16.568349] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.754 [2024-07-12 12:34:16.583956] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.754 [2024-07-12 12:34:16.583992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.754 [2024-07-12 12:34:16.599323] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.754 [2024-07-12 12:34:16.599362] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.754 [2024-07-12 12:34:16.609672] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.754 [2024-07-12 12:34:16.609708] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.754 [2024-07-12 12:34:16.623454] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.754 [2024-07-12 12:34:16.623492] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.754 [2024-07-12 12:34:16.638614] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.754 [2024-07-12 12:34:16.638643] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.754 [2024-07-12 12:34:16.654264] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.754 [2024-07-12 12:34:16.654313] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.754 [2024-07-12 12:34:16.670798] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.754 [2024-07-12 12:34:16.670842] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.754 [2024-07-12 12:34:16.688683] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.754 [2024-07-12 12:34:16.688722] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.754 [2024-07-12 12:34:16.703848] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.754 [2024-07-12 12:34:16.703886] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.754 [2024-07-12 12:34:16.719856] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.754 [2024-07-12 12:34:16.719905] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.754 [2024-07-12 12:34:16.729369] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.754 [2024-07-12 12:34:16.729448] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.754 [2024-07-12 12:34:16.745604] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.754 [2024-07-12 12:34:16.745639] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.755 [2024-07-12 12:34:16.764345] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.755 [2024-07-12 12:34:16.764380] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.755 [2024-07-12 12:34:16.779657] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.755 [2024-07-12 12:34:16.779698] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.755 [2024-07-12 12:34:16.788940] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.755 [2024-07-12 12:34:16.789008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.755 [2024-07-12 12:34:16.806001] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.755 [2024-07-12 12:34:16.806070] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.755 [2024-07-12 12:34:16.822545] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.755 [2024-07-12 12:34:16.822603] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.021 [2024-07-12 12:34:16.839975] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.021 [2024-07-12 12:34:16.840017] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.021 [2024-07-12 12:34:16.856233] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.021 [2024-07-12 12:34:16.856274] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.021 [2024-07-12 12:34:16.872701] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.021 [2024-07-12 12:34:16.872745] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.021 [2024-07-12 12:34:16.892302] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.021 [2024-07-12 12:34:16.892364] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.021 [2024-07-12 12:34:16.907469] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.021 [2024-07-12 12:34:16.907511] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.021 [2024-07-12 12:34:16.924731] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.021 [2024-07-12 12:34:16.924775] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.021 [2024-07-12 12:34:16.941708] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.021 [2024-07-12 12:34:16.941746] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.021 [2024-07-12 12:34:16.959747] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.021 [2024-07-12 12:34:16.959798] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.021 [2024-07-12 12:34:16.975801] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.021 [2024-07-12 12:34:16.975838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.021 [2024-07-12 12:34:16.992676] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.021 [2024-07-12 12:34:16.992712] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.021 [2024-07-12 12:34:17.008182] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.021 [2024-07-12 12:34:17.008217] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.021 [2024-07-12 12:34:17.018000] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.021 [2024-07-12 12:34:17.018052] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.021 [2024-07-12 12:34:17.034606] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.021 [2024-07-12 12:34:17.034650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.021 [2024-07-12 12:34:17.052369] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.021 [2024-07-12 12:34:17.052470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.021 [2024-07-12 12:34:17.067491] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.021 [2024-07-12 12:34:17.067537] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.021 [2024-07-12 12:34:17.077523] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.021 [2024-07-12 12:34:17.077563] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.293 [2024-07-12 12:34:17.093489] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.293 [2024-07-12 12:34:17.093529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.293 [2024-07-12 12:34:17.109718] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.293 [2024-07-12 12:34:17.109757] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.293 [2024-07-12 12:34:17.119752] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.293 [2024-07-12 12:34:17.119790] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.293 [2024-07-12 12:34:17.134067] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.293 [2024-07-12 12:34:17.134107] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.293 [2024-07-12 12:34:17.143830] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.293 [2024-07-12 12:34:17.143867] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.293 [2024-07-12 12:34:17.160601] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.293 [2024-07-12 12:34:17.160641] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.293 [2024-07-12 12:34:17.177795] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.293 [2024-07-12 12:34:17.177838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.293 [2024-07-12 12:34:17.194177] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.293 [2024-07-12 12:34:17.194218] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.293 [2024-07-12 12:34:17.211192] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.293 [2024-07-12 12:34:17.211227] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.293 [2024-07-12 12:34:17.228077] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.293 [2024-07-12 12:34:17.228113] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.293 [2024-07-12 12:34:17.244310] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.293 [2024-07-12 12:34:17.244347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.293 [2024-07-12 12:34:17.261493] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.293 [2024-07-12 12:34:17.261528] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.293 [2024-07-12 12:34:17.277151] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.293 [2024-07-12 12:34:17.277206] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.293 [2024-07-12 12:34:17.294038] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.293 [2024-07-12 12:34:17.294077] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.293 [2024-07-12 12:34:17.311350] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.293 [2024-07-12 12:34:17.311435] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.293 [2024-07-12 12:34:17.328278] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.293 [2024-07-12 12:34:17.328317] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.293 [2024-07-12 12:34:17.344621] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.293 [2024-07-12 12:34:17.344660] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.293 [2024-07-12 12:34:17.361171] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.293 [2024-07-12 12:34:17.361209] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.552 [2024-07-12 12:34:17.376182] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.552 [2024-07-12 12:34:17.376220] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.552 [2024-07-12 12:34:17.393587] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.552 [2024-07-12 12:34:17.393626] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.552 [2024-07-12 12:34:17.408602] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.552 [2024-07-12 12:34:17.408641] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.552 [2024-07-12 12:34:17.418189] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.552 [2024-07-12 12:34:17.418228] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.552 [2024-07-12 12:34:17.434460] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.552 [2024-07-12 12:34:17.434499] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.552 [2024-07-12 12:34:17.446493] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.552 [2024-07-12 12:34:17.446532] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.552 [2024-07-12 12:34:17.461919] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.552 [2024-07-12 12:34:17.461958] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.552 [2024-07-12 12:34:17.480809] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.552 [2024-07-12 12:34:17.480858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.552 [2024-07-12 12:34:17.495249] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.552 [2024-07-12 12:34:17.495290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.552 [2024-07-12 12:34:17.510599] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.552 [2024-07-12 12:34:17.510639] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.552 [2024-07-12 12:34:17.520336] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.552 [2024-07-12 12:34:17.520377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.553 [2024-07-12 12:34:17.536335] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.553 [2024-07-12 12:34:17.536382] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.553 [2024-07-12 12:34:17.553796] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.553 [2024-07-12 12:34:17.553857] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.553 [2024-07-12 12:34:17.568857] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.553 [2024-07-12 12:34:17.568895] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.553 [2024-07-12 12:34:17.584604] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.553 [2024-07-12 12:34:17.584640] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.553 [2024-07-12 12:34:17.604237] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.553 [2024-07-12 12:34:17.604279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.553 [2024-07-12 12:34:17.618761] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.553 [2024-07-12 12:34:17.618803] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.811 [2024-07-12 12:34:17.636387] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.811 [2024-07-12 12:34:17.636440] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.811 [2024-07-12 12:34:17.650712] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.811 [2024-07-12 12:34:17.650750] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.811 [2024-07-12 12:34:17.665959] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.811 [2024-07-12 12:34:17.665998] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.811 [2024-07-12 12:34:17.684625] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.811 [2024-07-12 12:34:17.684666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.811 [2024-07-12 12:34:17.699361] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.811 [2024-07-12 12:34:17.699422] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.811 [2024-07-12 12:34:17.711110] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.811 [2024-07-12 12:34:17.711148] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.811 [2024-07-12 12:34:17.726522] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.811 [2024-07-12 12:34:17.726562] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.811 [2024-07-12 12:34:17.745740] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.811 [2024-07-12 12:34:17.745789] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.811 [2024-07-12 12:34:17.760178] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.811 [2024-07-12 12:34:17.760226] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.811 [2024-07-12 12:34:17.770069] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.811 [2024-07-12 12:34:17.770112] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.811 [2024-07-12 12:34:17.786135] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.811 [2024-07-12 12:34:17.786179] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.811 [2024-07-12 12:34:17.805418] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.811 [2024-07-12 12:34:17.805465] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.811 [2024-07-12 12:34:17.820250] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.811 [2024-07-12 12:34:17.820300] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.811 [2024-07-12 12:34:17.830427] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.811 [2024-07-12 12:34:17.830479] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.811 [2024-07-12 12:34:17.845459] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.811 [2024-07-12 12:34:17.845520] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.811 [2024-07-12 12:34:17.861855] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.811 [2024-07-12 12:34:17.861912] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.811 [2024-07-12 12:34:17.880536] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.811 [2024-07-12 12:34:17.880592] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.069 [2024-07-12 12:34:17.895445] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.069 [2024-07-12 12:34:17.895499] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.069 [2024-07-12 12:34:17.913022] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.069 [2024-07-12 12:34:17.913081] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.069 [2024-07-12 12:34:17.927779] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.069 [2024-07-12 12:34:17.927834] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.069 [2024-07-12 12:34:17.943863] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.069 [2024-07-12 12:34:17.943931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.069 [2024-07-12 12:34:17.961142] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.069 [2024-07-12 12:34:17.961209] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.069 [2024-07-12 12:34:17.976736] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.069 [2024-07-12 12:34:17.976801] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.069 [2024-07-12 12:34:17.994066] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.069 [2024-07-12 12:34:17.994145] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.069 [2024-07-12 12:34:18.008457] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.069 [2024-07-12 12:34:18.008517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:52.069
00:09:52.069 Latency(us)
00:09:52.069 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:52.069 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:52.069 Nvme1n1 : 5.01 11542.64 90.18 0.00 0.00 11073.77 4706.68 19184.17
00:09:52.069 ===================================================================================================================
00:09:52.069 Total : 11542.64 90.18 0.00 0.00 11073.77 4706.68 19184.17
00:09:52.069 [2024-07-12 12:34:18.017955] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.069 [2024-07-12 12:34:18.018007] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.069 [2024-07-12 12:34:18.029950] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.069 [2024-07-12 12:34:18.029999] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.069 [2024-07-12 12:34:18.041962] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.069 [2024-07-12 12:34:18.042015] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.069 [2024-07-12 12:34:18.053968] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.069 [2024-07-12 12:34:18.054020] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.069 [2024-07-12 12:34:18.065975] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.069 [2024-07-12 12:34:18.066027] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.069 [2024-07-12 12:34:18.077983] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.069 [2024-07-12 12:34:18.078047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.069 [2024-07-12 12:34:18.089982] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.069 [2024-07-12 12:34:18.090035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.069 [2024-07-12 12:34:18.101976] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.069 [2024-07-12 12:34:18.102028]
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.069 [2024-07-12 12:34:18.113977] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.069 [2024-07-12 12:34:18.114028] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.069 [2024-07-12 12:34:18.125993] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.069 [2024-07-12 12:34:18.126049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.069 [2024-07-12 12:34:18.137989] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.069 [2024-07-12 12:34:18.138045] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.326 [2024-07-12 12:34:18.149998] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.326 [2024-07-12 12:34:18.150052] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.326 [2024-07-12 12:34:18.162010] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.326 [2024-07-12 12:34:18.162076] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.326 [2024-07-12 12:34:18.173998] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.326 [2024-07-12 12:34:18.174050] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.326 [2024-07-12 12:34:18.185998] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.326 [2024-07-12 12:34:18.186047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.326 [2024-07-12 12:34:18.198007] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.326 [2024-07-12 12:34:18.198058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.326 [2024-07-12 12:34:18.210015] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.326 [2024-07-12 12:34:18.210070] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.326 [2024-07-12 12:34:18.222007] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.326 [2024-07-12 12:34:18.222058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.326 [2024-07-12 12:34:18.234023] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.326 [2024-07-12 12:34:18.234080] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.326 [2024-07-12 12:34:18.246029] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.326 [2024-07-12 12:34:18.246086] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.326 [2024-07-12 12:34:18.258021] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.326 [2024-07-12 12:34:18.258070] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.326 [2024-07-12 12:34:18.270011] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.326 [2024-07-12 12:34:18.270053] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.326 [2024-07-12 12:34:18.282012] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.326 [2024-07-12 12:34:18.282054] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.326 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (68037) - No such process 00:09:52.326 12:34:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 68037 00:09:52.326 12:34:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.326 12:34:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.326 12:34:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.326 12:34:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.326 12:34:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:52.326 12:34:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.326 12:34:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.326 delay0 00:09:52.326 12:34:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.326 12:34:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:52.326 12:34:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:52.326 12:34:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.326 12:34:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:52.326 12:34:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:52.583 [2024-07-12 12:34:18.473489] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:59.136 Initializing NVMe Controllers 00:09:59.136 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:59.136 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:59.136 Initialization complete. Launching workers. 
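The abort run launched above is driven by the RPC sequence shown just before it: zcopy.sh detaches namespace 1 from nqn.2016-06.io.spdk:cnode1, wraps the malloc0 bdev in a delay bdev (delay0) so that I/O stays outstanding long enough for aborts to land, re-exposes it as NSID 1, and then points the bundled abort example at the TCP listener. A minimal manual sketch of the same sequence, assuming a target configured like this run (malloc0 present, cnode1 listening on 10.0.0.2:4420) and using scripts/rpc.py from the SPDK repo in place of the rpc_cmd test helper:

    # detach the existing namespace and put a delay bdev in its place
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000    # per-I/O delay knobs for the delay bdev
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # queue random read/write I/O over NVMe/TCP and abort it (same flags as the run above)
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
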
00:09:59.136 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 213 00:09:59.136 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 500, failed to submit 33 00:09:59.136 success 375, unsuccess 125, failed 0 00:09:59.136 12:34:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:59.136 12:34:24 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:59.136 12:34:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:59.136 12:34:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:09:59.136 12:34:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:59.136 12:34:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:09:59.136 12:34:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:59.136 12:34:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:59.136 rmmod nvme_tcp 00:09:59.136 rmmod nvme_fabrics 00:09:59.136 rmmod nvme_keyring 00:09:59.136 12:34:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:59.136 12:34:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:09:59.136 12:34:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:09:59.136 12:34:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 67882 ']' 00:09:59.136 12:34:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 67882 00:09:59.136 12:34:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 67882 ']' 00:09:59.136 12:34:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 67882 00:09:59.136 12:34:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:09:59.136 12:34:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:59.136 12:34:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67882 00:09:59.136 killing process with pid 67882 00:09:59.136 12:34:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:59.136 12:34:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:59.136 12:34:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67882' 00:09:59.136 12:34:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 67882 00:09:59.136 12:34:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 67882 00:09:59.136 12:34:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:59.136 12:34:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:59.136 12:34:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:59.136 12:34:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:59.136 12:34:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:59.136 12:34:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.137 12:34:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:59.137 12:34:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.137 12:34:24 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:59.137 ************************************ 00:09:59.137 END TEST nvmf_zcopy 00:09:59.137 ************************************ 00:09:59.137 00:09:59.137 real 
0m25.012s 00:09:59.137 user 0m40.768s 00:09:59.137 sys 0m7.027s 00:09:59.137 12:34:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:59.137 12:34:24 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:59.137 12:34:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:59.137 12:34:24 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:59.137 12:34:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:59.137 12:34:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:59.137 12:34:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:59.137 ************************************ 00:09:59.137 START TEST nvmf_nmic 00:09:59.137 ************************************ 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:59.137 * Looking for test storage... 00:09:59.137 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=16360ad5-8c23-4d49-afe0-9a35c426fec5 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:59.137 Cannot find device "nvmf_tgt_br" 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:59.137 Cannot find device "nvmf_tgt_br2" 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:59.137 Cannot find device "nvmf_tgt_br" 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:59.137 Cannot find device "nvmf_tgt_br2" 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type 
bridge 00:09:59.137 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:59.396 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:59.396 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:59.396 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:59.396 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:09:59.396 00:09:59.396 --- 10.0.0.2 ping statistics --- 00:09:59.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.396 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:59.396 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:59.396 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:09:59.396 00:09:59.396 --- 10.0.0.3 ping statistics --- 00:09:59.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.396 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:59.396 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:59.396 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:09:59.396 00:09:59.396 --- 10.0.0.1 ping statistics --- 00:09:59.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.396 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=68360 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 68360 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 68360 ']' 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:59.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:59.396 12:34:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.654 [2024-07-12 12:34:25.492957] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:09:59.654 [2024-07-12 12:34:25.493062] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.654 [2024-07-12 12:34:25.630588] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:59.912 [2024-07-12 12:34:25.751993] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:59.913 [2024-07-12 12:34:25.752282] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:59.913 [2024-07-12 12:34:25.752461] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:59.913 [2024-07-12 12:34:25.752591] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:59.913 [2024-07-12 12:34:25.752628] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:59.913 [2024-07-12 12:34:25.752880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.913 [2024-07-12 12:34:25.752959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:59.913 [2024-07-12 12:34:25.753033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.913 [2024-07-12 12:34:25.753033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:59.913 [2024-07-12 12:34:25.807786] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:00.479 12:34:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:00.479 12:34:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:10:00.479 12:34:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:00.479 12:34:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:00.479 12:34:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:00.479 12:34:26 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:00.479 12:34:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:00.479 12:34:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.479 12:34:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:00.479 [2024-07-12 12:34:26.461290] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:00.479 12:34:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.479 12:34:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:00.479 12:34:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.479 12:34:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:00.479 Malloc0 00:10:00.479 12:34:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.479 12:34:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:00.479 12:34:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.479 12:34:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:00.479 12:34:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
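The rpc_cmd calls traced here perform the standard NVMe-oF TCP target bring-up that nmic.sh scripts through common.sh. A rough standalone sketch of the same sequence using scripts/rpc.py directly is shown below; the transport flags, bdev size, NQN, serial and the 10.0.0.2:4420 listener are copied from this trace, and the $rpc shorthand is only a local convenience, so adjust all of them for any other setup:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # create the TCP transport (option flags copied verbatim from the trace)
  $rpc nvmf_create_transport -t tcp -o -u 8192
  # 64 MiB malloc bdev with 512-byte blocks to back the namespace
  $rpc bdev_malloc_create 64 512 -b Malloc0
  # subsystem that allows any host (-a), with the serial the test greps for later
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  # attach the bdev as a namespace and listen on TCP 10.0.0.2:4420
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator side, as run later in this trace: connect over TCP and wait for the namespace
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 \
      --hostid=16360ad5-8c23-4d49-afe0-9a35c426fec5 \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

In this CI run the same calls are issued through rpc_cmd against the nvmf_tgt launched above inside the nvmf_tgt_ns_spdk network namespace.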
00:10:00.479 12:34:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:00.479 12:34:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.479 12:34:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:00.479 12:34:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.479 12:34:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:00.479 12:34:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.479 12:34:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:00.479 [2024-07-12 12:34:26.532375] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:00.479 test case1: single bdev can't be used in multiple subsystems 00:10:00.479 12:34:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.479 12:34:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:00.479 12:34:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:00.479 12:34:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.479 12:34:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:00.479 12:34:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.479 12:34:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:00.479 12:34:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.479 12:34:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:00.737 12:34:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.737 12:34:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:00.737 12:34:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:00.737 12:34:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.737 12:34:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:00.737 [2024-07-12 12:34:26.560226] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:00.737 [2024-07-12 12:34:26.560276] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:00.737 [2024-07-12 12:34:26.560289] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.737 request: 00:10:00.737 { 00:10:00.737 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:00.737 "namespace": { 00:10:00.737 "bdev_name": "Malloc0", 00:10:00.737 "no_auto_visible": false 00:10:00.737 }, 00:10:00.737 "method": "nvmf_subsystem_add_ns", 00:10:00.737 "req_id": 1 00:10:00.737 } 00:10:00.737 Got JSON-RPC error response 00:10:00.737 response: 00:10:00.737 { 00:10:00.737 "code": -32602, 00:10:00.737 "message": "Invalid parameters" 00:10:00.737 } 00:10:00.737 12:34:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:10:00.737 12:34:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:00.737 12:34:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 
-eq 0 ']' 00:10:00.737 12:34:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:00.737 Adding namespace failed - expected result. 00:10:00.737 12:34:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:00.737 test case2: host connect to nvmf target in multiple paths 00:10:00.737 12:34:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:00.737 12:34:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:00.737 12:34:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:00.737 [2024-07-12 12:34:26.576413] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:00.737 12:34:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:00.737 12:34:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid=16360ad5-8c23-4d49-afe0-9a35c426fec5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:00.737 12:34:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid=16360ad5-8c23-4d49-afe0-9a35c426fec5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:00.995 12:34:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:00.995 12:34:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:00.995 12:34:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:00.995 12:34:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:00.995 12:34:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:02.894 12:34:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:02.894 12:34:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:02.894 12:34:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:02.894 12:34:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:02.894 12:34:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:02.894 12:34:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:02.894 12:34:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:02.894 [global] 00:10:02.894 thread=1 00:10:02.894 invalidate=1 00:10:02.894 rw=write 00:10:02.894 time_based=1 00:10:02.894 runtime=1 00:10:02.894 ioengine=libaio 00:10:02.894 direct=1 00:10:02.894 bs=4096 00:10:02.894 iodepth=1 00:10:02.894 norandommap=0 00:10:02.894 numjobs=1 00:10:02.894 00:10:02.894 verify_dump=1 00:10:02.894 verify_backlog=512 00:10:02.894 verify_state_save=0 00:10:02.894 do_verify=1 00:10:02.894 verify=crc32c-intel 00:10:02.894 [job0] 00:10:02.894 filename=/dev/nvme0n1 00:10:02.894 Could not set queue depth (nvme0n1) 00:10:03.151 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:03.151 fio-3.35 00:10:03.151 Starting 1 thread 00:10:04.524 00:10:04.524 job0: (groupid=0, jobs=1): err= 0: pid=68447: Fri Jul 12 12:34:30 
2024 00:10:04.524 read: IOPS=2989, BW=11.7MiB/s (12.2MB/s)(11.7MiB/1001msec) 00:10:04.524 slat (nsec): min=13086, max=53711, avg=15005.94, stdev=1777.59 00:10:04.524 clat (usec): min=144, max=342, avg=184.66, stdev=24.33 00:10:04.524 lat (usec): min=158, max=356, avg=199.67, stdev=24.43 00:10:04.524 clat percentiles (usec): 00:10:04.524 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 169], 00:10:04.525 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 184], 00:10:04.525 | 70.00th=[ 188], 80.00th=[ 194], 90.00th=[ 204], 95.00th=[ 245], 00:10:04.525 | 99.00th=[ 281], 99.50th=[ 289], 99.90th=[ 326], 99.95th=[ 334], 00:10:04.525 | 99.99th=[ 343] 00:10:04.525 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:04.525 slat (usec): min=19, max=150, avg=21.93, stdev= 4.67 00:10:04.525 clat (usec): min=85, max=237, avg=105.35, stdev=10.45 00:10:04.525 lat (usec): min=105, max=387, avg=127.28, stdev=12.82 00:10:04.525 clat percentiles (usec): 00:10:04.525 | 1.00th=[ 88], 5.00th=[ 91], 10.00th=[ 93], 20.00th=[ 97], 00:10:04.525 | 30.00th=[ 100], 40.00th=[ 103], 50.00th=[ 105], 60.00th=[ 108], 00:10:04.525 | 70.00th=[ 110], 80.00th=[ 113], 90.00th=[ 118], 95.00th=[ 124], 00:10:04.525 | 99.00th=[ 139], 99.50th=[ 147], 99.90th=[ 163], 99.95th=[ 184], 00:10:04.525 | 99.99th=[ 237] 00:10:04.525 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:10:04.525 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:04.525 lat (usec) : 100=15.17%, 250=82.65%, 500=2.18% 00:10:04.525 cpu : usr=2.30%, sys=8.90%, ctx=6064, majf=0, minf=2 00:10:04.525 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:04.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.525 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.525 issued rwts: total=2992,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.525 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:04.525 00:10:04.525 Run status group 0 (all jobs): 00:10:04.525 READ: bw=11.7MiB/s (12.2MB/s), 11.7MiB/s-11.7MiB/s (12.2MB/s-12.2MB/s), io=11.7MiB (12.3MB), run=1001-1001msec 00:10:04.525 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:10:04.525 00:10:04.525 Disk stats (read/write): 00:10:04.525 nvme0n1: ios=2610/2941, merge=0/0, ticks=497/336, in_queue=833, util=91.38% 00:10:04.525 12:34:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:04.525 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:04.525 12:34:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:04.525 12:34:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:04.525 12:34:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:04.525 12:34:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:04.525 12:34:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:04.525 12:34:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:04.525 12:34:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:04.525 12:34:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:04.525 12:34:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 
-- # nvmftestfini 00:10:04.525 12:34:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:04.525 12:34:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:10:04.525 12:34:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:04.525 12:34:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:10:04.525 12:34:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:04.525 12:34:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:04.525 rmmod nvme_tcp 00:10:04.525 rmmod nvme_fabrics 00:10:04.525 rmmod nvme_keyring 00:10:04.525 12:34:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:04.525 12:34:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:10:04.525 12:34:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:10:04.525 12:34:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 68360 ']' 00:10:04.525 12:34:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 68360 00:10:04.525 12:34:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 68360 ']' 00:10:04.525 12:34:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 68360 00:10:04.525 12:34:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:10:04.525 12:34:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:04.525 12:34:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68360 00:10:04.525 killing process with pid 68360 00:10:04.525 12:34:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:04.525 12:34:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:04.525 12:34:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68360' 00:10:04.525 12:34:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 68360 00:10:04.525 12:34:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 68360 00:10:04.783 12:34:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:04.783 12:34:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:04.783 12:34:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:04.783 12:34:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:04.783 12:34:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:04.783 12:34:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.783 12:34:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:04.783 12:34:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.783 12:34:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:04.783 00:10:04.783 real 0m5.694s 00:10:04.783 user 0m18.107s 00:10:04.783 sys 0m2.307s 00:10:04.783 12:34:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:04.783 12:34:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:04.783 ************************************ 00:10:04.783 END TEST nvmf_nmic 00:10:04.783 ************************************ 00:10:04.783 12:34:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:04.783 12:34:30 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh 
--transport=tcp 00:10:04.783 12:34:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:04.783 12:34:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:04.783 12:34:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:04.783 ************************************ 00:10:04.783 START TEST nvmf_fio_target 00:10:04.783 ************************************ 00:10:04.783 12:34:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:04.783 * Looking for test storage... 00:10:04.783 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:04.783 12:34:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:04.783 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:04.783 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.783 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.783 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.783 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.783 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.783 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.783 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.783 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.783 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.783 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.783 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:10:04.783 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=16360ad5-8c23-4d49-afe0-9a35c426fec5 00:10:04.783 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.783 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.783 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:04.783 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.783 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:04.783 12:34:30 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.783 12:34:30 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.783 12:34:30 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.783 12:34:30 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.783 12:34:30 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.783 12:34:30 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.783 12:34:30 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:04.784 12:34:30 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.784 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:10:04.784 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:04.784 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:04.784 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.784 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.784 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.784 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:04.784 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:04.784 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:04.784 12:34:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:04.784 12:34:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:04.784 12:34:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:04.784 12:34:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:04.784 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:04.784 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:04.784 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:04.784 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:04.784 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:04.784 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.784 12:34:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:04.784 12:34:30 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.784 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:04.784 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:04.784 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:04.784 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:04.784 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:04.784 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:04.784 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:05.042 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:05.042 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:05.042 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:05.042 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:05.042 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:05.042 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:05.042 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:05.042 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:05.042 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:05.042 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:05.042 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:05.042 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:05.042 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:05.042 Cannot find device "nvmf_tgt_br" 00:10:05.042 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:10:05.042 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:05.042 Cannot find device "nvmf_tgt_br2" 00:10:05.042 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:10:05.042 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:05.042 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # 
ip link set nvmf_tgt_br down 00:10:05.042 Cannot find device "nvmf_tgt_br" 00:10:05.042 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:10:05.042 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:05.042 Cannot find device "nvmf_tgt_br2" 00:10:05.042 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:10:05.042 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:05.042 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:05.042 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:05.042 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:05.042 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:10:05.042 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:05.042 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:05.042 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:10:05.042 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:05.042 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:05.042 12:34:30 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:05.042 12:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:05.042 12:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:05.042 12:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:05.042 12:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:05.042 12:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:05.300 12:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:05.300 12:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:05.300 12:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:05.300 12:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:05.300 12:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:05.300 12:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:05.300 12:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:05.300 12:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:05.300 12:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:05.300 12:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:05.300 12:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:05.300 12:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:10:05.300 12:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:05.301 12:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:05.301 12:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:05.301 12:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:05.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:05.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:10:05.301 00:10:05.301 --- 10.0.0.2 ping statistics --- 00:10:05.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:05.301 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:10:05.301 12:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:05.301 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:05.301 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:10:05.301 00:10:05.301 --- 10.0.0.3 ping statistics --- 00:10:05.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:05.301 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:10:05.301 12:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:05.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:05.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:10:05.301 00:10:05.301 --- 10.0.0.1 ping statistics --- 00:10:05.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:05.301 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:10:05.301 12:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:05.301 12:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:10:05.301 12:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:05.301 12:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:05.301 12:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:05.301 12:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:05.301 12:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:05.301 12:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:05.301 12:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:05.301 12:34:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:05.301 12:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:05.301 12:34:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:05.301 12:34:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.301 12:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=68631 00:10:05.301 12:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:05.301 12:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 68631 00:10:05.301 12:34:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 68631 ']' 00:10:05.301 12:34:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.301 12:34:31 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:05.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:05.301 12:34:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.301 12:34:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:05.301 12:34:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.301 [2024-07-12 12:34:31.343675] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:10:05.301 [2024-07-12 12:34:31.343789] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:05.560 [2024-07-12 12:34:31.477767] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:05.560 [2024-07-12 12:34:31.613081] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:05.560 [2024-07-12 12:34:31.613451] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:05.560 [2024-07-12 12:34:31.613647] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:05.560 [2024-07-12 12:34:31.613766] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:05.560 [2024-07-12 12:34:31.613905] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:05.560 [2024-07-12 12:34:31.614061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:05.560 [2024-07-12 12:34:31.614138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:05.560 [2024-07-12 12:34:31.614196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:05.560 [2024-07-12 12:34:31.614198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.818 [2024-07-12 12:34:31.669836] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:06.385 12:34:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:06.385 12:34:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:10:06.385 12:34:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:06.385 12:34:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:06.385 12:34:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.658 12:34:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:06.658 12:34:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:06.924 [2024-07-12 12:34:32.736863] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:06.924 12:34:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:07.182 12:34:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:07.182 12:34:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:10:07.438 12:34:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:07.438 12:34:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:07.694 12:34:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:07.694 12:34:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:07.952 12:34:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:07.952 12:34:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:08.209 12:34:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:08.467 12:34:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:08.467 12:34:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:08.725 12:34:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:08.725 12:34:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:08.982 12:34:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:08.982 12:34:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:09.240 12:34:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:09.498 12:34:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:09.498 12:34:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:09.756 12:34:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:09.756 12:34:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:10.013 12:34:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:10.271 [2024-07-12 12:34:36.135527] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:10.271 12:34:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:10.530 12:34:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:10.788 12:34:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid=16360ad5-8c23-4d49-afe0-9a35c426fec5 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:10.788 12:34:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:10.788 12:34:36 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1198 -- # local i=0 00:10:10.788 12:34:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:10.788 12:34:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:10.788 12:34:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:10.788 12:34:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:13.314 12:34:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:13.314 12:34:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:13.314 12:34:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:13.314 12:34:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:13.314 12:34:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:13.314 12:34:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:13.314 12:34:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:13.314 [global] 00:10:13.314 thread=1 00:10:13.314 invalidate=1 00:10:13.314 rw=write 00:10:13.314 time_based=1 00:10:13.314 runtime=1 00:10:13.314 ioengine=libaio 00:10:13.314 direct=1 00:10:13.314 bs=4096 00:10:13.314 iodepth=1 00:10:13.314 norandommap=0 00:10:13.314 numjobs=1 00:10:13.314 00:10:13.314 verify_dump=1 00:10:13.314 verify_backlog=512 00:10:13.314 verify_state_save=0 00:10:13.314 do_verify=1 00:10:13.314 verify=crc32c-intel 00:10:13.314 [job0] 00:10:13.314 filename=/dev/nvme0n1 00:10:13.314 [job1] 00:10:13.314 filename=/dev/nvme0n2 00:10:13.314 [job2] 00:10:13.314 filename=/dev/nvme0n3 00:10:13.314 [job3] 00:10:13.314 filename=/dev/nvme0n4 00:10:13.314 Could not set queue depth (nvme0n1) 00:10:13.314 Could not set queue depth (nvme0n2) 00:10:13.314 Could not set queue depth (nvme0n3) 00:10:13.314 Could not set queue depth (nvme0n4) 00:10:13.314 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:13.314 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:13.314 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:13.314 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:13.314 fio-3.35 00:10:13.314 Starting 4 threads 00:10:14.247 00:10:14.247 job0: (groupid=0, jobs=1): err= 0: pid=68816: Fri Jul 12 12:34:40 2024 00:10:14.247 read: IOPS=2634, BW=10.3MiB/s (10.8MB/s)(10.3MiB/1001msec) 00:10:14.247 slat (usec): min=12, max=233, avg=15.62, stdev= 5.23 00:10:14.247 clat (usec): min=3, max=8096, avg=183.26, stdev=178.39 00:10:14.247 lat (usec): min=153, max=8109, avg=198.87, stdev=178.56 00:10:14.247 clat percentiles (usec): 00:10:14.247 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:10:14.247 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:10:14.247 | 70.00th=[ 178], 80.00th=[ 186], 90.00th=[ 221], 95.00th=[ 237], 00:10:14.247 | 99.00th=[ 306], 99.50th=[ 330], 99.90th=[ 2573], 99.95th=[ 2802], 00:10:14.247 | 99.99th=[ 8094] 00:10:14.247 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:14.247 slat (usec): min=13, 
max=131, avg=22.09, stdev= 5.67 00:10:14.247 clat (usec): min=90, max=917, avg=129.22, stdev=36.02 00:10:14.247 lat (usec): min=109, max=941, avg=151.31, stdev=36.61 00:10:14.247 clat percentiles (usec): 00:10:14.247 | 1.00th=[ 96], 5.00th=[ 101], 10.00th=[ 105], 20.00th=[ 111], 00:10:14.247 | 30.00th=[ 115], 40.00th=[ 119], 50.00th=[ 122], 60.00th=[ 125], 00:10:14.247 | 70.00th=[ 130], 80.00th=[ 137], 90.00th=[ 167], 95.00th=[ 190], 00:10:14.247 | 99.00th=[ 229], 99.50th=[ 243], 99.90th=[ 603], 99.95th=[ 783], 00:10:14.247 | 99.99th=[ 922] 00:10:14.247 bw ( KiB/s): min=12288, max=12288, per=30.81%, avg=12288.00, stdev= 0.00, samples=1 00:10:14.247 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:14.247 lat (usec) : 4=0.02%, 100=2.21%, 250=96.18%, 500=1.42%, 750=0.04% 00:10:14.247 lat (usec) : 1000=0.04% 00:10:14.247 lat (msec) : 2=0.05%, 4=0.04%, 10=0.02% 00:10:14.247 cpu : usr=2.50%, sys=8.40%, ctx=5719, majf=0, minf=7 00:10:14.247 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:14.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.247 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.247 issued rwts: total=2637,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.247 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:14.247 job1: (groupid=0, jobs=1): err= 0: pid=68817: Fri Jul 12 12:34:40 2024 00:10:14.247 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:14.247 slat (nsec): min=9749, max=52974, avg=18949.99, stdev=6151.42 00:10:14.247 clat (usec): min=139, max=352, avg=183.78, stdev=33.48 00:10:14.247 lat (usec): min=153, max=368, avg=202.73, stdev=32.38 00:10:14.247 clat percentiles (usec): 00:10:14.247 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 163], 00:10:14.247 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 178], 00:10:14.247 | 70.00th=[ 184], 80.00th=[ 196], 90.00th=[ 231], 95.00th=[ 249], 00:10:14.247 | 99.00th=[ 314], 99.50th=[ 322], 99.90th=[ 347], 99.95th=[ 351], 00:10:14.247 | 99.99th=[ 355] 00:10:14.247 write: IOPS=2917, BW=11.4MiB/s (11.9MB/s)(11.4MiB/1001msec); 0 zone resets 00:10:14.247 slat (usec): min=12, max=138, avg=28.78, stdev=10.38 00:10:14.247 clat (usec): min=90, max=2024, avg=131.32, stdev=42.68 00:10:14.247 lat (usec): min=112, max=2048, avg=160.10, stdev=42.70 00:10:14.247 clat percentiles (usec): 00:10:14.247 | 1.00th=[ 100], 5.00th=[ 108], 10.00th=[ 111], 20.00th=[ 116], 00:10:14.247 | 30.00th=[ 119], 40.00th=[ 122], 50.00th=[ 125], 60.00th=[ 129], 00:10:14.247 | 70.00th=[ 133], 80.00th=[ 139], 90.00th=[ 165], 95.00th=[ 186], 00:10:14.247 | 99.00th=[ 217], 99.50th=[ 227], 99.90th=[ 334], 99.95th=[ 482], 00:10:14.247 | 99.99th=[ 2024] 00:10:14.247 bw ( KiB/s): min=12288, max=12288, per=30.81%, avg=12288.00, stdev= 0.00, samples=1 00:10:14.247 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:14.247 lat (usec) : 100=0.57%, 250=97.04%, 500=2.37% 00:10:14.247 lat (msec) : 4=0.02% 00:10:14.247 cpu : usr=2.40%, sys=11.10%, ctx=5481, majf=0, minf=12 00:10:14.247 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:14.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.247 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.247 issued rwts: total=2560,2920,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.247 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:14.247 job2: 
(groupid=0, jobs=1): err= 0: pid=68818: Fri Jul 12 12:34:40 2024 00:10:14.247 read: IOPS=1638, BW=6553KiB/s (6711kB/s)(6560KiB/1001msec) 00:10:14.247 slat (usec): min=15, max=491, avg=22.80, stdev=12.79 00:10:14.247 clat (usec): min=181, max=3176, avg=305.11, stdev=115.34 00:10:14.247 lat (usec): min=198, max=3215, avg=327.91, stdev=116.89 00:10:14.247 clat percentiles (usec): 00:10:14.247 | 1.00th=[ 202], 5.00th=[ 249], 10.00th=[ 255], 20.00th=[ 265], 00:10:14.247 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 293], 00:10:14.247 | 70.00th=[ 302], 80.00th=[ 314], 90.00th=[ 355], 95.00th=[ 494], 00:10:14.247 | 99.00th=[ 529], 99.50th=[ 545], 99.90th=[ 2802], 99.95th=[ 3163], 00:10:14.247 | 99.99th=[ 3163] 00:10:14.247 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:14.247 slat (usec): min=19, max=104, avg=30.90, stdev= 8.04 00:10:14.247 clat (usec): min=103, max=479, avg=190.12, stdev=34.61 00:10:14.247 lat (usec): min=125, max=512, avg=221.02, stdev=34.94 00:10:14.247 clat percentiles (usec): 00:10:14.247 | 1.00th=[ 116], 5.00th=[ 124], 10.00th=[ 133], 20.00th=[ 161], 00:10:14.247 | 30.00th=[ 182], 40.00th=[ 190], 50.00th=[ 196], 60.00th=[ 202], 00:10:14.247 | 70.00th=[ 208], 80.00th=[ 215], 90.00th=[ 225], 95.00th=[ 237], 00:10:14.247 | 99.00th=[ 262], 99.50th=[ 281], 99.90th=[ 343], 99.95th=[ 371], 00:10:14.247 | 99.99th=[ 478] 00:10:14.247 bw ( KiB/s): min= 8192, max= 8192, per=20.54%, avg=8192.00, stdev= 0.00, samples=1 00:10:14.247 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:14.247 lat (usec) : 250=57.02%, 500=41.27%, 750=1.60%, 1000=0.05% 00:10:14.247 lat (msec) : 4=0.05% 00:10:14.247 cpu : usr=1.80%, sys=8.10%, ctx=3689, majf=0, minf=7 00:10:14.247 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:14.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.247 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.247 issued rwts: total=1640,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.247 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:14.247 job3: (groupid=0, jobs=1): err= 0: pid=68819: Fri Jul 12 12:34:40 2024 00:10:14.247 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:14.247 slat (nsec): min=13773, max=54108, avg=24727.70, stdev=7455.26 00:10:14.247 clat (usec): min=165, max=1293, avg=303.37, stdev=63.03 00:10:14.247 lat (usec): min=195, max=1330, avg=328.10, stdev=65.96 00:10:14.247 clat percentiles (usec): 00:10:14.247 | 1.00th=[ 190], 5.00th=[ 249], 10.00th=[ 262], 20.00th=[ 269], 00:10:14.247 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 293], 00:10:14.247 | 70.00th=[ 302], 80.00th=[ 334], 90.00th=[ 383], 95.00th=[ 408], 00:10:14.247 | 99.00th=[ 529], 99.50th=[ 603], 99.90th=[ 766], 99.95th=[ 1287], 00:10:14.247 | 99.99th=[ 1287] 00:10:14.247 write: IOPS=1940, BW=7760KiB/s (7946kB/s)(7768KiB/1001msec); 0 zone resets 00:10:14.247 slat (usec): min=19, max=121, avg=30.86, stdev=10.62 00:10:14.247 clat (usec): min=109, max=530, avg=219.63, stdev=56.60 00:10:14.247 lat (usec): min=134, max=593, avg=250.49, stdev=63.18 00:10:14.247 clat percentiles (usec): 00:10:14.247 | 1.00th=[ 122], 5.00th=[ 133], 10.00th=[ 145], 20.00th=[ 192], 00:10:14.247 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 217], 00:10:14.247 | 70.00th=[ 227], 80.00th=[ 239], 90.00th=[ 322], 95.00th=[ 343], 00:10:14.247 | 99.00th=[ 379], 99.50th=[ 396], 99.90th=[ 433], 99.95th=[ 529], 
00:10:14.247 | 99.99th=[ 529] 00:10:14.247 bw ( KiB/s): min= 8192, max= 8192, per=20.54%, avg=8192.00, stdev= 0.00, samples=1 00:10:14.247 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:14.247 lat (usec) : 250=49.40%, 500=50.06%, 750=0.49%, 1000=0.03% 00:10:14.247 lat (msec) : 2=0.03% 00:10:14.247 cpu : usr=1.90%, sys=7.80%, ctx=3478, majf=0, minf=9 00:10:14.247 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:14.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.247 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.247 issued rwts: total=1536,1942,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.247 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:14.247 00:10:14.247 Run status group 0 (all jobs): 00:10:14.247 READ: bw=32.7MiB/s (34.3MB/s), 6138KiB/s-10.3MiB/s (6285kB/s-10.8MB/s), io=32.7MiB (34.3MB), run=1001-1001msec 00:10:14.247 WRITE: bw=39.0MiB/s (40.8MB/s), 7760KiB/s-12.0MiB/s (7946kB/s-12.6MB/s), io=39.0MiB (40.9MB), run=1001-1001msec 00:10:14.247 00:10:14.247 Disk stats (read/write): 00:10:14.247 nvme0n1: ios=2571/2560, merge=0/0, ticks=487/324, in_queue=811, util=88.28% 00:10:14.247 nvme0n2: ios=2389/2560, merge=0/0, ticks=458/351, in_queue=809, util=89.69% 00:10:14.247 nvme0n3: ios=1536/1646, merge=0/0, ticks=468/335, in_queue=803, util=89.21% 00:10:14.247 nvme0n4: ios=1452/1536, merge=0/0, ticks=439/355, in_queue=794, util=89.88% 00:10:14.247 12:34:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:14.247 [global] 00:10:14.247 thread=1 00:10:14.247 invalidate=1 00:10:14.247 rw=randwrite 00:10:14.247 time_based=1 00:10:14.247 runtime=1 00:10:14.247 ioengine=libaio 00:10:14.247 direct=1 00:10:14.247 bs=4096 00:10:14.247 iodepth=1 00:10:14.247 norandommap=0 00:10:14.247 numjobs=1 00:10:14.247 00:10:14.247 verify_dump=1 00:10:14.247 verify_backlog=512 00:10:14.247 verify_state_save=0 00:10:14.247 do_verify=1 00:10:14.247 verify=crc32c-intel 00:10:14.247 [job0] 00:10:14.247 filename=/dev/nvme0n1 00:10:14.247 [job1] 00:10:14.247 filename=/dev/nvme0n2 00:10:14.247 [job2] 00:10:14.247 filename=/dev/nvme0n3 00:10:14.247 [job3] 00:10:14.247 filename=/dev/nvme0n4 00:10:14.247 Could not set queue depth (nvme0n1) 00:10:14.247 Could not set queue depth (nvme0n2) 00:10:14.247 Could not set queue depth (nvme0n3) 00:10:14.247 Could not set queue depth (nvme0n4) 00:10:14.506 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:14.506 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:14.506 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:14.506 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:14.506 fio-3.35 00:10:14.506 Starting 4 threads 00:10:15.440 00:10:15.440 job0: (groupid=0, jobs=1): err= 0: pid=68872: Fri Jul 12 12:34:41 2024 00:10:15.440 read: IOPS=2990, BW=11.7MiB/s (12.2MB/s)(11.7MiB/1001msec) 00:10:15.440 slat (nsec): min=12578, max=45888, avg=14794.66, stdev=2758.47 00:10:15.440 clat (usec): min=138, max=873, avg=165.20, stdev=23.17 00:10:15.440 lat (usec): min=151, max=887, avg=180.00, stdev=23.52 00:10:15.440 clat percentiles (usec): 00:10:15.440 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 
153], 00:10:15.440 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 167], 00:10:15.440 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 182], 95.00th=[ 188], 00:10:15.440 | 99.00th=[ 206], 99.50th=[ 221], 99.90th=[ 562], 99.95th=[ 578], 00:10:15.440 | 99.99th=[ 873] 00:10:15.441 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:15.441 slat (usec): min=15, max=161, avg=22.48, stdev= 5.18 00:10:15.441 clat (usec): min=93, max=420, avg=123.61, stdev=17.48 00:10:15.441 lat (usec): min=114, max=441, avg=146.09, stdev=18.64 00:10:15.441 clat percentiles (usec): 00:10:15.441 | 1.00th=[ 100], 5.00th=[ 105], 10.00th=[ 109], 20.00th=[ 114], 00:10:15.441 | 30.00th=[ 118], 40.00th=[ 120], 50.00th=[ 123], 60.00th=[ 125], 00:10:15.441 | 70.00th=[ 128], 80.00th=[ 133], 90.00th=[ 139], 95.00th=[ 145], 00:10:15.441 | 99.00th=[ 165], 99.50th=[ 202], 99.90th=[ 359], 99.95th=[ 367], 00:10:15.441 | 99.99th=[ 420] 00:10:15.441 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:10:15.441 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:15.441 lat (usec) : 100=0.59%, 250=99.03%, 500=0.33%, 750=0.03%, 1000=0.02% 00:10:15.441 cpu : usr=2.70%, sys=8.90%, ctx=6067, majf=0, minf=18 00:10:15.441 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:15.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.441 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.441 issued rwts: total=2993,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.441 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:15.441 job1: (groupid=0, jobs=1): err= 0: pid=68873: Fri Jul 12 12:34:41 2024 00:10:15.441 read: IOPS=2421, BW=9686KiB/s (9919kB/s)(9696KiB/1001msec) 00:10:15.441 slat (nsec): min=9252, max=35750, avg=12227.19, stdev=2142.51 00:10:15.441 clat (usec): min=140, max=368, avg=203.16, stdev=49.09 00:10:15.441 lat (usec): min=153, max=378, avg=215.38, stdev=47.72 00:10:15.441 clat percentiles (usec): 00:10:15.441 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 157], 00:10:15.441 | 30.00th=[ 161], 40.00th=[ 167], 50.00th=[ 176], 60.00th=[ 239], 00:10:15.441 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 269], 95.00th=[ 273], 00:10:15.441 | 99.00th=[ 289], 99.50th=[ 297], 99.90th=[ 330], 99.95th=[ 334], 00:10:15.441 | 99.99th=[ 367] 00:10:15.441 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:15.441 slat (nsec): min=11700, max=56956, avg=18501.44, stdev=4125.81 00:10:15.441 clat (usec): min=94, max=2644, avg=164.75, stdev=84.45 00:10:15.441 lat (usec): min=112, max=2660, avg=183.25, stdev=84.06 00:10:15.441 clat percentiles (usec): 00:10:15.441 | 1.00th=[ 100], 5.00th=[ 105], 10.00th=[ 110], 20.00th=[ 117], 00:10:15.441 | 30.00th=[ 124], 40.00th=[ 131], 50.00th=[ 176], 60.00th=[ 188], 00:10:15.441 | 70.00th=[ 196], 80.00th=[ 202], 90.00th=[ 215], 95.00th=[ 223], 00:10:15.441 | 99.00th=[ 245], 99.50th=[ 273], 99.90th=[ 1516], 99.95th=[ 1729], 00:10:15.441 | 99.99th=[ 2638] 00:10:15.441 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:10:15.441 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:15.441 lat (usec) : 100=0.60%, 250=84.41%, 500=14.83%, 750=0.02%, 1000=0.06% 00:10:15.441 lat (msec) : 2=0.06%, 4=0.02% 00:10:15.441 cpu : usr=1.90%, sys=6.10%, ctx=4986, majf=0, minf=9 00:10:15.441 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:10:15.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.441 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.441 issued rwts: total=2424,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.441 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:15.441 job2: (groupid=0, jobs=1): err= 0: pid=68875: Fri Jul 12 12:34:41 2024 00:10:15.441 read: IOPS=2311, BW=9247KiB/s (9469kB/s)(9256KiB/1001msec) 00:10:15.441 slat (nsec): min=8940, max=56225, avg=15083.86, stdev=2831.40 00:10:15.441 clat (usec): min=148, max=2650, avg=216.26, stdev=71.96 00:10:15.441 lat (usec): min=163, max=2664, avg=231.34, stdev=71.31 00:10:15.441 clat percentiles (usec): 00:10:15.441 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 178], 00:10:15.441 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 202], 00:10:15.441 | 70.00th=[ 249], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 306], 00:10:15.441 | 99.00th=[ 359], 99.50th=[ 375], 99.90th=[ 635], 99.95th=[ 816], 00:10:15.441 | 99.99th=[ 2638] 00:10:15.441 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:15.441 slat (usec): min=15, max=1459, avg=24.13, stdev=29.08 00:10:15.441 clat (usec): min=3, max=334, avg=153.62, stdev=34.13 00:10:15.441 lat (usec): min=117, max=1462, avg=177.75, stdev=41.47 00:10:15.441 clat percentiles (usec): 00:10:15.441 | 1.00th=[ 114], 5.00th=[ 119], 10.00th=[ 123], 20.00th=[ 127], 00:10:15.441 | 30.00th=[ 130], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 147], 00:10:15.441 | 70.00th=[ 165], 80.00th=[ 192], 90.00th=[ 208], 95.00th=[ 219], 00:10:15.441 | 99.00th=[ 239], 99.50th=[ 249], 99.90th=[ 273], 99.95th=[ 277], 00:10:15.441 | 99.99th=[ 334] 00:10:15.441 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:10:15.441 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:15.441 lat (usec) : 4=0.02%, 100=0.04%, 250=85.82%, 500=14.05%, 750=0.02% 00:10:15.441 lat (usec) : 1000=0.02% 00:10:15.441 lat (msec) : 4=0.02% 00:10:15.441 cpu : usr=2.00%, sys=7.70%, ctx=4886, majf=0, minf=13 00:10:15.441 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:15.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.441 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.441 issued rwts: total=2314,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.441 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:15.441 job3: (groupid=0, jobs=1): err= 0: pid=68880: Fri Jul 12 12:34:41 2024 00:10:15.441 read: IOPS=1909, BW=7636KiB/s (7820kB/s)(7644KiB/1001msec) 00:10:15.441 slat (nsec): min=8838, max=68292, avg=14040.96, stdev=2624.17 00:10:15.441 clat (usec): min=216, max=1682, avg=269.57, stdev=47.13 00:10:15.441 lat (usec): min=230, max=1696, avg=283.61, stdev=47.38 00:10:15.441 clat percentiles (usec): 00:10:15.441 | 1.00th=[ 229], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 247], 00:10:15.441 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 260], 60.00th=[ 265], 00:10:15.441 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 326], 95.00th=[ 351], 00:10:15.441 | 99.00th=[ 379], 99.50th=[ 388], 99.90th=[ 685], 99.95th=[ 1680], 00:10:15.441 | 99.99th=[ 1680] 00:10:15.441 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:15.441 slat (nsec): min=11620, max=86046, avg=20057.44, stdev=5122.96 00:10:15.441 clat (usec): min=130, max=2361, avg=200.28, stdev=78.48 00:10:15.441 lat 
(usec): min=151, max=2392, avg=220.33, stdev=79.03 00:10:15.441 clat percentiles (usec): 00:10:15.441 | 1.00th=[ 139], 5.00th=[ 159], 10.00th=[ 174], 20.00th=[ 182], 00:10:15.441 | 30.00th=[ 186], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 202], 00:10:15.441 | 70.00th=[ 208], 80.00th=[ 215], 90.00th=[ 223], 95.00th=[ 231], 00:10:15.441 | 99.00th=[ 258], 99.50th=[ 302], 99.90th=[ 1237], 99.95th=[ 2114], 00:10:15.441 | 99.99th=[ 2376] 00:10:15.441 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:10:15.441 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:15.441 lat (usec) : 250=64.74%, 500=35.01%, 750=0.10% 00:10:15.441 lat (msec) : 2=0.10%, 4=0.05% 00:10:15.441 cpu : usr=1.40%, sys=6.00%, ctx=3960, majf=0, minf=5 00:10:15.441 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:15.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.441 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.441 issued rwts: total=1911,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.441 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:15.441 00:10:15.441 Run status group 0 (all jobs): 00:10:15.441 READ: bw=37.6MiB/s (39.5MB/s), 7636KiB/s-11.7MiB/s (7820kB/s-12.2MB/s), io=37.7MiB (39.5MB), run=1001-1001msec 00:10:15.441 WRITE: bw=40.0MiB/s (41.9MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=40.0MiB (41.9MB), run=1001-1001msec 00:10:15.441 00:10:15.441 Disk stats (read/write): 00:10:15.441 nvme0n1: ios=2610/2729, merge=0/0, ticks=445/359, in_queue=804, util=89.17% 00:10:15.441 nvme0n2: ios=2091/2124, merge=0/0, ticks=437/332, in_queue=769, util=88.59% 00:10:15.441 nvme0n3: ios=2048/2283, merge=0/0, ticks=441/357, in_queue=798, util=89.32% 00:10:15.441 nvme0n4: ios=1536/1981, merge=0/0, ticks=400/384, in_queue=784, util=89.67% 00:10:15.441 12:34:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:15.706 [global] 00:10:15.706 thread=1 00:10:15.706 invalidate=1 00:10:15.706 rw=write 00:10:15.706 time_based=1 00:10:15.706 runtime=1 00:10:15.706 ioengine=libaio 00:10:15.706 direct=1 00:10:15.706 bs=4096 00:10:15.706 iodepth=128 00:10:15.706 norandommap=0 00:10:15.706 numjobs=1 00:10:15.706 00:10:15.706 verify_dump=1 00:10:15.706 verify_backlog=512 00:10:15.706 verify_state_save=0 00:10:15.706 do_verify=1 00:10:15.706 verify=crc32c-intel 00:10:15.706 [job0] 00:10:15.706 filename=/dev/nvme0n1 00:10:15.706 [job1] 00:10:15.706 filename=/dev/nvme0n2 00:10:15.706 [job2] 00:10:15.706 filename=/dev/nvme0n3 00:10:15.706 [job3] 00:10:15.706 filename=/dev/nvme0n4 00:10:15.706 Could not set queue depth (nvme0n1) 00:10:15.706 Could not set queue depth (nvme0n2) 00:10:15.706 Could not set queue depth (nvme0n3) 00:10:15.706 Could not set queue depth (nvme0n4) 00:10:15.706 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:15.706 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:15.706 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:15.706 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:15.706 fio-3.35 00:10:15.706 Starting 4 threads 00:10:17.079 00:10:17.079 job0: (groupid=0, jobs=1): err= 0: pid=68935: Fri Jul 12 12:34:42 2024 
00:10:17.079 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:10:17.079 slat (usec): min=6, max=4356, avg=85.29, stdev=397.14 00:10:17.079 clat (usec): min=3923, max=15002, avg=11434.42, stdev=1125.52 00:10:17.079 lat (usec): min=3936, max=15029, avg=11519.72, stdev=1061.48 00:10:17.079 clat percentiles (usec): 00:10:17.079 | 1.00th=[ 7242], 5.00th=[10290], 10.00th=[10552], 20.00th=[10814], 00:10:17.079 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11338], 60.00th=[11469], 00:10:17.079 | 70.00th=[11731], 80.00th=[12387], 90.00th=[12780], 95.00th=[13042], 00:10:17.079 | 99.00th=[14615], 99.50th=[14877], 99.90th=[15008], 99.95th=[15008], 00:10:17.079 | 99.99th=[15008] 00:10:17.079 write: IOPS=5641, BW=22.0MiB/s (23.1MB/s)(22.1MiB/1002msec); 0 zone resets 00:10:17.079 slat (usec): min=10, max=3036, avg=83.74, stdev=334.96 00:10:17.079 clat (usec): min=1207, max=12729, avg=10996.43, stdev=945.67 00:10:17.079 lat (usec): min=1236, max=13190, avg=11080.17, stdev=887.96 00:10:17.079 clat percentiles (usec): 00:10:17.079 | 1.00th=[ 8717], 5.00th=[10028], 10.00th=[10159], 20.00th=[10421], 00:10:17.079 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10945], 60.00th=[11207], 00:10:17.079 | 70.00th=[11469], 80.00th=[11863], 90.00th=[12125], 95.00th=[12256], 00:10:17.079 | 99.00th=[12387], 99.50th=[12518], 99.90th=[12649], 99.95th=[12649], 00:10:17.079 | 99.99th=[12780] 00:10:17.079 bw ( KiB/s): min=22480, max=22576, per=32.00%, avg=22528.00, stdev=67.88, samples=2 00:10:17.079 iops : min= 5620, max= 5644, avg=5632.00, stdev=16.97, samples=2 00:10:17.079 lat (msec) : 2=0.19%, 4=0.04%, 10=4.02%, 20=95.75% 00:10:17.079 cpu : usr=4.90%, sys=15.98%, ctx=355, majf=0, minf=7 00:10:17.079 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:17.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.079 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:17.079 issued rwts: total=5632,5653,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:17.079 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:17.079 job1: (groupid=0, jobs=1): err= 0: pid=68936: Fri Jul 12 12:34:42 2024 00:10:17.079 read: IOPS=5233, BW=20.4MiB/s (21.4MB/s)(20.5MiB/1003msec) 00:10:17.079 slat (usec): min=6, max=3487, avg=88.31, stdev=409.83 00:10:17.079 clat (usec): min=314, max=14468, avg=11750.62, stdev=1499.06 00:10:17.079 lat (usec): min=2418, max=14516, avg=11838.93, stdev=1451.08 00:10:17.079 clat percentiles (usec): 00:10:17.079 | 1.00th=[ 5342], 5.00th=[10552], 10.00th=[10814], 20.00th=[10945], 00:10:17.079 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11469], 60.00th=[11731], 00:10:17.079 | 70.00th=[12518], 80.00th=[13304], 90.00th=[13829], 95.00th=[13960], 00:10:17.079 | 99.00th=[14222], 99.50th=[14353], 99.90th=[14484], 99.95th=[14484], 00:10:17.079 | 99.99th=[14484] 00:10:17.079 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:10:17.079 slat (usec): min=10, max=3301, avg=87.04, stdev=352.13 00:10:17.079 clat (usec): min=8259, max=15025, avg=11543.88, stdev=1161.54 00:10:17.079 lat (usec): min=9292, max=15050, avg=11630.92, stdev=1113.51 00:10:17.079 clat percentiles (usec): 00:10:17.079 | 1.00th=[ 9110], 5.00th=[10290], 10.00th=[10421], 20.00th=[10683], 00:10:17.079 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11207], 60.00th=[11469], 00:10:17.079 | 70.00th=[11863], 80.00th=[12649], 90.00th=[13304], 95.00th=[13698], 00:10:17.079 | 99.00th=[14877], 99.50th=[14877], 99.90th=[15008], 99.95th=[15008], 
00:10:17.079 | 99.99th=[15008] 00:10:17.079 bw ( KiB/s): min=20480, max=24576, per=32.00%, avg=22528.00, stdev=2896.31, samples=2 00:10:17.080 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:10:17.080 lat (usec) : 500=0.01% 00:10:17.080 lat (msec) : 4=0.29%, 10=2.86%, 20=96.84% 00:10:17.080 cpu : usr=6.19%, sys=15.37%, ctx=342, majf=0, minf=9 00:10:17.080 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:17.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:17.080 issued rwts: total=5249,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:17.080 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:17.080 job2: (groupid=0, jobs=1): err= 0: pid=68937: Fri Jul 12 12:34:42 2024 00:10:17.080 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:10:17.080 slat (usec): min=9, max=5961, avg=162.09, stdev=650.26 00:10:17.080 clat (usec): min=10930, max=36103, avg=21064.99, stdev=5853.21 00:10:17.080 lat (usec): min=13214, max=36119, avg=21227.07, stdev=5871.67 00:10:17.080 clat percentiles (usec): 00:10:17.080 | 1.00th=[11731], 5.00th=[13960], 10.00th=[14484], 20.00th=[14615], 00:10:17.080 | 30.00th=[14746], 40.00th=[18220], 50.00th=[22676], 60.00th=[25035], 00:10:17.080 | 70.00th=[25560], 80.00th=[25822], 90.00th=[28181], 95.00th=[29230], 00:10:17.080 | 99.00th=[32900], 99.50th=[34341], 99.90th=[35914], 99.95th=[35914], 00:10:17.080 | 99.99th=[35914] 00:10:17.080 write: IOPS=3285, BW=12.8MiB/s (13.5MB/s)(12.9MiB/1003msec); 0 zone resets 00:10:17.080 slat (usec): min=9, max=6517, avg=143.73, stdev=585.16 00:10:17.080 clat (usec): min=2482, max=36697, avg=18778.63, stdev=6011.33 00:10:17.080 lat (usec): min=2594, max=36717, avg=18922.37, stdev=6030.14 00:10:17.080 clat percentiles (usec): 00:10:17.080 | 1.00th=[ 9503], 5.00th=[12649], 10.00th=[12911], 20.00th=[13173], 00:10:17.080 | 30.00th=[13435], 40.00th=[13829], 50.00th=[16712], 60.00th=[22152], 00:10:17.080 | 70.00th=[24249], 80.00th=[25297], 90.00th=[26346], 95.00th=[27657], 00:10:17.080 | 99.00th=[30278], 99.50th=[31065], 99.90th=[36439], 99.95th=[36439], 00:10:17.080 | 99.99th=[36439] 00:10:17.080 bw ( KiB/s): min= 8960, max=16384, per=18.00%, avg=12672.00, stdev=5249.56, samples=2 00:10:17.080 iops : min= 2240, max= 4096, avg=3168.00, stdev=1312.39, samples=2 00:10:17.080 lat (msec) : 4=0.03%, 10=0.57%, 20=49.19%, 50=50.21% 00:10:17.080 cpu : usr=3.79%, sys=9.58%, ctx=621, majf=0, minf=11 00:10:17.080 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:17.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:17.080 issued rwts: total=3072,3295,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:17.080 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:17.080 job3: (groupid=0, jobs=1): err= 0: pid=68938: Fri Jul 12 12:34:42 2024 00:10:17.080 read: IOPS=3014, BW=11.8MiB/s (12.3MB/s)(11.8MiB/1003msec) 00:10:17.080 slat (usec): min=4, max=7482, avg=170.82, stdev=674.92 00:10:17.080 clat (usec): min=2020, max=35060, avg=22022.92, stdev=6037.97 00:10:17.080 lat (usec): min=5032, max=35078, avg=22193.74, stdev=6043.33 00:10:17.080 clat percentiles (usec): 00:10:17.080 | 1.00th=[10552], 5.00th=[13435], 10.00th=[13960], 20.00th=[14615], 00:10:17.080 | 30.00th=[16319], 40.00th=[23200], 50.00th=[25035], 60.00th=[25560], 
00:10:17.080 | 70.00th=[25822], 80.00th=[26870], 90.00th=[28705], 95.00th=[30016], 00:10:17.080 | 99.00th=[31065], 99.50th=[33817], 99.90th=[34866], 99.95th=[34866], 00:10:17.080 | 99.99th=[34866] 00:10:17.080 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:10:17.080 slat (usec): min=4, max=7936, avg=149.15, stdev=622.07 00:10:17.080 clat (usec): min=10612, max=30523, avg=19524.36, stdev=5334.36 00:10:17.080 lat (usec): min=11200, max=30547, avg=19673.51, stdev=5344.64 00:10:17.080 clat percentiles (usec): 00:10:17.080 | 1.00th=[11338], 5.00th=[13173], 10.00th=[13435], 20.00th=[13829], 00:10:17.080 | 30.00th=[14222], 40.00th=[15664], 50.00th=[19530], 60.00th=[22676], 00:10:17.080 | 70.00th=[24773], 80.00th=[25297], 90.00th=[26084], 95.00th=[26608], 00:10:17.080 | 99.00th=[28443], 99.50th=[29230], 99.90th=[29230], 99.95th=[29754], 00:10:17.080 | 99.99th=[30540] 00:10:17.080 bw ( KiB/s): min= 8704, max=15872, per=17.46%, avg=12288.00, stdev=5068.54, samples=2 00:10:17.080 iops : min= 2176, max= 3968, avg=3072.00, stdev=1267.14, samples=2 00:10:17.080 lat (msec) : 4=0.02%, 10=0.46%, 20=43.91%, 50=55.61% 00:10:17.080 cpu : usr=2.89%, sys=9.28%, ctx=617, majf=0, minf=14 00:10:17.080 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:17.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:17.080 issued rwts: total=3024,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:17.080 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:17.080 00:10:17.080 Run status group 0 (all jobs): 00:10:17.080 READ: bw=66.1MiB/s (69.3MB/s), 11.8MiB/s-22.0MiB/s (12.3MB/s-23.0MB/s), io=66.3MiB (69.5MB), run=1002-1003msec 00:10:17.080 WRITE: bw=68.7MiB/s (72.1MB/s), 12.0MiB/s-22.0MiB/s (12.5MB/s-23.1MB/s), io=69.0MiB (72.3MB), run=1002-1003msec 00:10:17.080 00:10:17.080 Disk stats (read/write): 00:10:17.080 nvme0n1: ios=4657/4928, merge=0/0, ticks=11906/11471, in_queue=23377, util=87.06% 00:10:17.080 nvme0n2: ios=4549/4608, merge=0/0, ticks=11796/11256, in_queue=23052, util=86.70% 00:10:17.080 nvme0n3: ios=2560/3038, merge=0/0, ticks=12252/12259, in_queue=24511, util=88.76% 00:10:17.080 nvme0n4: ios=2560/2797, merge=0/0, ticks=12795/11631, in_queue=24426, util=88.56% 00:10:17.080 12:34:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:17.080 [global] 00:10:17.080 thread=1 00:10:17.080 invalidate=1 00:10:17.080 rw=randwrite 00:10:17.080 time_based=1 00:10:17.080 runtime=1 00:10:17.080 ioengine=libaio 00:10:17.080 direct=1 00:10:17.080 bs=4096 00:10:17.080 iodepth=128 00:10:17.080 norandommap=0 00:10:17.080 numjobs=1 00:10:17.080 00:10:17.080 verify_dump=1 00:10:17.080 verify_backlog=512 00:10:17.080 verify_state_save=0 00:10:17.080 do_verify=1 00:10:17.080 verify=crc32c-intel 00:10:17.080 [job0] 00:10:17.080 filename=/dev/nvme0n1 00:10:17.080 [job1] 00:10:17.080 filename=/dev/nvme0n2 00:10:17.080 [job2] 00:10:17.080 filename=/dev/nvme0n3 00:10:17.080 [job3] 00:10:17.080 filename=/dev/nvme0n4 00:10:17.080 Could not set queue depth (nvme0n1) 00:10:17.080 Could not set queue depth (nvme0n2) 00:10:17.080 Could not set queue depth (nvme0n3) 00:10:17.080 Could not set queue depth (nvme0n4) 00:10:17.080 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:17.080 job1: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:17.080 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:17.080 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:17.080 fio-3.35 00:10:17.080 Starting 4 threads 00:10:18.454 00:10:18.454 job0: (groupid=0, jobs=1): err= 0: pid=68991: Fri Jul 12 12:34:44 2024 00:10:18.454 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:10:18.454 slat (usec): min=7, max=13297, avg=182.55, stdev=1236.54 00:10:18.454 clat (usec): min=14359, max=40605, avg=25068.28, stdev=2849.33 00:10:18.454 lat (usec): min=14377, max=50689, avg=25250.83, stdev=2907.72 00:10:18.454 clat percentiles (usec): 00:10:18.454 | 1.00th=[15270], 5.00th=[22676], 10.00th=[23462], 20.00th=[24249], 00:10:18.454 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25035], 60.00th=[25297], 00:10:18.454 | 70.00th=[25560], 80.00th=[26084], 90.00th=[26608], 95.00th=[28443], 00:10:18.454 | 99.00th=[36963], 99.50th=[38011], 99.90th=[40633], 99.95th=[40633], 00:10:18.454 | 99.99th=[40633] 00:10:18.454 write: IOPS=2769, BW=10.8MiB/s (11.3MB/s)(10.9MiB/1004msec); 0 zone resets 00:10:18.454 slat (usec): min=6, max=19347, avg=182.80, stdev=1227.33 00:10:18.454 clat (usec): min=3627, max=33058, avg=22681.94, stdev=3410.43 00:10:18.454 lat (usec): min=3651, max=33085, avg=22864.74, stdev=3247.83 00:10:18.454 clat percentiles (usec): 00:10:18.454 | 1.00th=[ 4424], 5.00th=[16319], 10.00th=[20841], 20.00th=[21365], 00:10:18.454 | 30.00th=[21890], 40.00th=[22676], 50.00th=[22938], 60.00th=[23462], 00:10:18.454 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24773], 95.00th=[26608], 00:10:18.454 | 99.00th=[32637], 99.50th=[32900], 99.90th=[32900], 99.95th=[33162], 00:10:18.454 | 99.99th=[33162] 00:10:18.454 bw ( KiB/s): min= 8952, max=12280, per=15.91%, avg=10616.00, stdev=2353.25, samples=2 00:10:18.454 iops : min= 2238, max= 3070, avg=2654.00, stdev=588.31, samples=2 00:10:18.454 lat (msec) : 4=0.26%, 10=0.32%, 20=4.94%, 50=94.48% 00:10:18.454 cpu : usr=2.09%, sys=8.57%, ctx=117, majf=0, minf=7 00:10:18.454 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:18.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.454 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:18.454 issued rwts: total=2560,2781,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.454 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:18.454 job1: (groupid=0, jobs=1): err= 0: pid=68992: Fri Jul 12 12:34:44 2024 00:10:18.454 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:10:18.454 slat (usec): min=7, max=5532, avg=81.20, stdev=481.31 00:10:18.454 clat (usec): min=7238, max=18093, avg=11438.25, stdev=1142.15 00:10:18.454 lat (usec): min=7261, max=21393, avg=11519.45, stdev=1169.95 00:10:18.454 clat percentiles (usec): 00:10:18.454 | 1.00th=[ 7701], 5.00th=[10421], 10.00th=[10945], 20.00th=[11076], 00:10:18.454 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11469], 60.00th=[11600], 00:10:18.454 | 70.00th=[11731], 80.00th=[11731], 90.00th=[12125], 95.00th=[12387], 00:10:18.454 | 99.00th=[17433], 99.50th=[17695], 99.90th=[17957], 99.95th=[17957], 00:10:18.454 | 99.99th=[18220] 00:10:18.454 write: IOPS=6089, BW=23.8MiB/s (24.9MB/s)(23.9MiB/1003msec); 0 zone resets 00:10:18.454 slat (usec): min=11, max=6247, avg=80.69, stdev=419.54 00:10:18.454 clat 
(usec): min=2047, max=13659, avg=10248.63, stdev=1042.72 00:10:18.454 lat (usec): min=2075, max=13815, avg=10329.32, stdev=974.54 00:10:18.454 clat percentiles (usec): 00:10:18.454 | 1.00th=[ 6652], 5.00th=[ 8848], 10.00th=[ 9503], 20.00th=[ 9765], 00:10:18.454 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10421], 60.00th=[10552], 00:10:18.454 | 70.00th=[10683], 80.00th=[10814], 90.00th=[10945], 95.00th=[11076], 00:10:18.454 | 99.00th=[13435], 99.50th=[13566], 99.90th=[13698], 99.95th=[13698], 00:10:18.454 | 99.99th=[13698] 00:10:18.454 bw ( KiB/s): min=23272, max=24625, per=35.88%, avg=23948.50, stdev=956.72, samples=2 00:10:18.454 iops : min= 5818, max= 6156, avg=5987.00, stdev=239.00, samples=2 00:10:18.454 lat (msec) : 4=0.26%, 10=18.47%, 20=81.28% 00:10:18.454 cpu : usr=5.19%, sys=18.26%, ctx=253, majf=0, minf=17 00:10:18.454 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:18.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.454 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:18.454 issued rwts: total=5632,6108,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.454 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:18.454 job2: (groupid=0, jobs=1): err= 0: pid=68993: Fri Jul 12 12:34:44 2024 00:10:18.454 read: IOPS=5014, BW=19.6MiB/s (20.5MB/s)(19.6MiB/1002msec) 00:10:18.454 slat (usec): min=8, max=2919, avg=95.45, stdev=442.74 00:10:18.454 clat (usec): min=361, max=13726, avg=12642.07, stdev=1088.82 00:10:18.454 lat (usec): min=2740, max=13764, avg=12737.52, stdev=995.66 00:10:18.454 clat percentiles (usec): 00:10:18.454 | 1.00th=[ 6521], 5.00th=[11994], 10.00th=[12387], 20.00th=[12649], 00:10:18.454 | 30.00th=[12649], 40.00th=[12780], 50.00th=[12780], 60.00th=[12911], 00:10:18.454 | 70.00th=[12911], 80.00th=[13042], 90.00th=[13173], 95.00th=[13304], 00:10:18.454 | 99.00th=[13435], 99.50th=[13566], 99.90th=[13566], 99.95th=[13698], 00:10:18.454 | 99.99th=[13698] 00:10:18.454 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:10:18.454 slat (usec): min=11, max=2884, avg=93.36, stdev=374.64 00:10:18.454 clat (usec): min=9465, max=13139, avg=12309.00, stdev=455.64 00:10:18.454 lat (usec): min=10711, max=13282, avg=12402.37, stdev=260.64 00:10:18.454 clat percentiles (usec): 00:10:18.454 | 1.00th=[10028], 5.00th=[11731], 10.00th=[11994], 20.00th=[12125], 00:10:18.454 | 30.00th=[12256], 40.00th=[12256], 50.00th=[12387], 60.00th=[12387], 00:10:18.454 | 70.00th=[12518], 80.00th=[12649], 90.00th=[12649], 95.00th=[12780], 00:10:18.454 | 99.00th=[12911], 99.50th=[12911], 99.90th=[13042], 99.95th=[13173], 00:10:18.454 | 99.99th=[13173] 00:10:18.454 bw ( KiB/s): min=20480, max=20521, per=30.71%, avg=20500.50, stdev=28.99, samples=2 00:10:18.454 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:10:18.454 lat (usec) : 500=0.01% 00:10:18.454 lat (msec) : 4=0.32%, 10=1.11%, 20=98.56% 00:10:18.454 cpu : usr=4.50%, sys=15.98%, ctx=359, majf=0, minf=11 00:10:18.454 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:18.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.454 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:18.454 issued rwts: total=5025,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.454 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:18.454 job3: (groupid=0, jobs=1): err= 0: pid=68994: Fri Jul 12 12:34:44 2024 00:10:18.454 read: 
IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:10:18.454 slat (usec): min=9, max=13428, avg=182.67, stdev=1237.53 00:10:18.454 clat (usec): min=14155, max=42260, avg=25048.97, stdev=2888.20 00:10:18.454 lat (usec): min=14177, max=50759, avg=25231.64, stdev=2946.02 00:10:18.454 clat percentiles (usec): 00:10:18.454 | 1.00th=[15008], 5.00th=[22676], 10.00th=[23462], 20.00th=[24249], 00:10:18.454 | 30.00th=[24773], 40.00th=[24773], 50.00th=[25035], 60.00th=[25297], 00:10:18.454 | 70.00th=[25560], 80.00th=[26084], 90.00th=[26608], 95.00th=[28443], 00:10:18.454 | 99.00th=[37487], 99.50th=[38011], 99.90th=[42206], 99.95th=[42206], 00:10:18.454 | 99.99th=[42206] 00:10:18.454 write: IOPS=2747, BW=10.7MiB/s (11.3MB/s)(10.8MiB/1005msec); 0 zone resets 00:10:18.454 slat (usec): min=8, max=19267, avg=184.26, stdev=1238.00 00:10:18.454 clat (usec): min=4246, max=33454, avg=22881.25, stdev=3049.99 00:10:18.454 lat (usec): min=4266, max=35855, avg=23065.51, stdev=2863.05 00:10:18.454 clat percentiles (usec): 00:10:18.454 | 1.00th=[13566], 5.00th=[17433], 10.00th=[20841], 20.00th=[21365], 00:10:18.454 | 30.00th=[22152], 40.00th=[22676], 50.00th=[23200], 60.00th=[23462], 00:10:18.454 | 70.00th=[23725], 80.00th=[24249], 90.00th=[24773], 95.00th=[26608], 00:10:18.454 | 99.00th=[33162], 99.50th=[33162], 99.90th=[33424], 99.95th=[33424], 00:10:18.454 | 99.99th=[33424] 00:10:18.454 bw ( KiB/s): min= 8824, max=12280, per=15.81%, avg=10552.00, stdev=2443.76, samples=2 00:10:18.454 iops : min= 2206, max= 3070, avg=2638.00, stdev=610.94, samples=2 00:10:18.454 lat (msec) : 10=0.21%, 20=4.44%, 50=95.36% 00:10:18.454 cpu : usr=2.79%, sys=7.97%, ctx=117, majf=0, minf=15 00:10:18.454 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:18.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.454 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:18.454 issued rwts: total=2560,2761,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.454 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:18.454 00:10:18.454 Run status group 0 (all jobs): 00:10:18.454 READ: bw=61.3MiB/s (64.3MB/s), 9.95MiB/s-21.9MiB/s (10.4MB/s-23.0MB/s), io=61.6MiB (64.6MB), run=1002-1005msec 00:10:18.454 WRITE: bw=65.2MiB/s (68.3MB/s), 10.7MiB/s-23.8MiB/s (11.3MB/s-24.9MB/s), io=65.5MiB (68.7MB), run=1002-1005msec 00:10:18.454 00:10:18.454 Disk stats (read/write): 00:10:18.454 nvme0n1: ios=2098/2368, merge=0/0, ticks=49705/52062, in_queue=101767, util=86.97% 00:10:18.454 nvme0n2: ios=4843/5120, merge=0/0, ticks=51341/47233, in_queue=98574, util=87.22% 00:10:18.454 nvme0n3: ios=4096/4544, merge=0/0, ticks=11627/11817, in_queue=23444, util=88.87% 00:10:18.454 nvme0n4: ios=2048/2368, merge=0/0, ticks=49705/52126, in_queue=101831, util=89.72% 00:10:18.454 12:34:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:18.454 12:34:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=69007 00:10:18.454 12:34:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:18.454 12:34:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:18.454 [global] 00:10:18.454 thread=1 00:10:18.454 invalidate=1 00:10:18.454 rw=read 00:10:18.454 time_based=1 00:10:18.454 runtime=10 00:10:18.454 ioengine=libaio 00:10:18.454 direct=1 00:10:18.454 bs=4096 00:10:18.454 iodepth=1 00:10:18.454 norandommap=1 00:10:18.454 numjobs=1 00:10:18.454 00:10:18.454 [job0] 
00:10:18.454 filename=/dev/nvme0n1 00:10:18.454 [job1] 00:10:18.454 filename=/dev/nvme0n2 00:10:18.454 [job2] 00:10:18.454 filename=/dev/nvme0n3 00:10:18.454 [job3] 00:10:18.454 filename=/dev/nvme0n4 00:10:18.454 Could not set queue depth (nvme0n1) 00:10:18.454 Could not set queue depth (nvme0n2) 00:10:18.454 Could not set queue depth (nvme0n3) 00:10:18.454 Could not set queue depth (nvme0n4) 00:10:18.454 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:18.454 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:18.454 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:18.454 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:18.454 fio-3.35 00:10:18.454 Starting 4 threads 00:10:21.774 12:34:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:21.774 fio: pid=69061, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:21.774 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=40554496, buflen=4096 00:10:21.774 12:34:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:22.031 fio: pid=69060, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:22.031 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=68009984, buflen=4096 00:10:22.031 12:34:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:22.031 12:34:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:22.288 fio: pid=69058, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:22.288 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=6086656, buflen=4096 00:10:22.288 12:34:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:22.288 12:34:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:22.853 fio: pid=69059, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:22.853 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=62652416, buflen=4096 00:10:22.853 00:10:22.853 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=69058: Fri Jul 12 12:34:48 2024 00:10:22.853 read: IOPS=4953, BW=19.3MiB/s (20.3MB/s)(69.8MiB/3608msec) 00:10:22.853 slat (usec): min=11, max=16324, avg=21.63, stdev=170.66 00:10:22.853 clat (usec): min=3, max=4307, avg=178.30, stdev=63.80 00:10:22.853 lat (usec): min=142, max=16562, avg=199.93, stdev=183.15 00:10:22.853 clat percentiles (usec): 00:10:22.853 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 157], 00:10:22.853 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 176], 00:10:22.853 | 70.00th=[ 184], 80.00th=[ 196], 90.00th=[ 212], 95.00th=[ 227], 00:10:22.853 | 99.00th=[ 269], 99.50th=[ 338], 99.90th=[ 685], 99.95th=[ 922], 00:10:22.853 | 99.99th=[ 3884] 00:10:22.853 bw ( KiB/s): min=17312, max=21560, per=33.33%, avg=19851.14, stdev=1464.19, samples=7 00:10:22.853 iops : min= 4328, max= 5390, avg=4962.71, stdev=366.07, 
samples=7 00:10:22.853 lat (usec) : 4=0.01%, 250=98.25%, 500=1.56%, 750=0.10%, 1000=0.03% 00:10:22.853 lat (msec) : 2=0.02%, 4=0.02%, 10=0.01% 00:10:22.854 cpu : usr=1.97%, sys=8.07%, ctx=17914, majf=0, minf=1 00:10:22.854 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:22.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.854 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.854 issued rwts: total=17871,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.854 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:22.854 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=69059: Fri Jul 12 12:34:48 2024 00:10:22.854 read: IOPS=3817, BW=14.9MiB/s (15.6MB/s)(59.8MiB/4007msec) 00:10:22.854 slat (usec): min=12, max=13235, avg=24.75, stdev=195.24 00:10:22.854 clat (usec): min=30, max=3522, avg=234.99, stdev=74.78 00:10:22.854 lat (usec): min=145, max=13438, avg=259.74, stdev=209.23 00:10:22.854 clat percentiles (usec): 00:10:22.854 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 153], 20.00th=[ 163], 00:10:22.854 | 30.00th=[ 180], 40.00th=[ 239], 50.00th=[ 251], 60.00th=[ 260], 00:10:22.854 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 297], 95.00th=[ 318], 00:10:22.854 | 99.00th=[ 375], 99.50th=[ 400], 99.90th=[ 523], 99.95th=[ 824], 00:10:22.854 | 99.99th=[ 2933] 00:10:22.854 bw ( KiB/s): min=13224, max=18985, per=24.09%, avg=14347.57, stdev=2074.20, samples=7 00:10:22.854 iops : min= 3306, max= 4746, avg=3586.86, stdev=518.46, samples=7 00:10:22.854 lat (usec) : 50=0.01%, 250=49.09%, 500=50.79%, 750=0.05%, 1000=0.03% 00:10:22.854 lat (msec) : 2=0.01%, 4=0.03% 00:10:22.854 cpu : usr=1.62%, sys=7.01%, ctx=15308, majf=0, minf=1 00:10:22.854 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:22.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.854 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.854 issued rwts: total=15297,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.854 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:22.854 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=69060: Fri Jul 12 12:34:48 2024 00:10:22.854 read: IOPS=5050, BW=19.7MiB/s (20.7MB/s)(64.9MiB/3288msec) 00:10:22.854 slat (usec): min=12, max=12811, avg=17.53, stdev=131.40 00:10:22.854 clat (usec): min=145, max=2513, avg=178.80, stdev=45.30 00:10:22.854 lat (usec): min=160, max=13015, avg=196.33, stdev=139.45 00:10:22.854 clat percentiles (usec): 00:10:22.854 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 163], 00:10:22.854 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 178], 00:10:22.854 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 200], 95.00th=[ 215], 00:10:22.854 | 99.00th=[ 260], 99.50th=[ 314], 99.90th=[ 791], 99.95th=[ 1074], 00:10:22.854 | 99.99th=[ 2212] 00:10:22.854 bw ( KiB/s): min=18984, max=21168, per=34.26%, avg=20409.33, stdev=870.16, samples=6 00:10:22.854 iops : min= 4746, max= 5292, avg=5102.33, stdev=217.54, samples=6 00:10:22.854 lat (usec) : 250=98.69%, 500=1.15%, 750=0.04%, 1000=0.05% 00:10:22.854 lat (msec) : 2=0.04%, 4=0.02% 00:10:22.854 cpu : usr=1.40%, sys=7.12%, ctx=16607, majf=0, minf=1 00:10:22.854 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:22.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.854 
complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.854 issued rwts: total=16605,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.854 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:22.854 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=69061: Fri Jul 12 12:34:48 2024 00:10:22.854 read: IOPS=3276, BW=12.8MiB/s (13.4MB/s)(38.7MiB/3022msec) 00:10:22.854 slat (usec): min=12, max=127, avg=19.71, stdev= 6.43 00:10:22.854 clat (usec): min=149, max=3393, avg=283.24, stdev=63.36 00:10:22.854 lat (usec): min=164, max=3419, avg=302.95, stdev=65.08 00:10:22.854 clat percentiles (usec): 00:10:22.854 | 1.00th=[ 231], 5.00th=[ 245], 10.00th=[ 251], 20.00th=[ 258], 00:10:22.854 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 277], 00:10:22.854 | 70.00th=[ 285], 80.00th=[ 297], 90.00th=[ 326], 95.00th=[ 371], 00:10:22.854 | 99.00th=[ 420], 99.50th=[ 433], 99.90th=[ 611], 99.95th=[ 1385], 00:10:22.854 | 99.99th=[ 3392] 00:10:22.854 bw ( KiB/s): min=12544, max=13872, per=22.04%, avg=13126.67, stdev=461.36, samples=6 00:10:22.854 iops : min= 3136, max= 3468, avg=3281.67, stdev=115.34, samples=6 00:10:22.854 lat (usec) : 250=9.41%, 500=90.43%, 750=0.06%, 1000=0.02% 00:10:22.854 lat (msec) : 2=0.04%, 4=0.03% 00:10:22.854 cpu : usr=1.56%, sys=5.66%, ctx=9903, majf=0, minf=1 00:10:22.854 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:22.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.854 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:22.854 issued rwts: total=9902,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:22.854 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:22.854 00:10:22.854 Run status group 0 (all jobs): 00:10:22.854 READ: bw=58.2MiB/s (61.0MB/s), 12.8MiB/s-19.7MiB/s (13.4MB/s-20.7MB/s), io=233MiB (244MB), run=3022-4007msec 00:10:22.854 00:10:22.854 Disk stats (read/write): 00:10:22.854 nvme0n1: ios=17871/0, merge=0/0, ticks=3290/0, in_queue=3290, util=95.09% 00:10:22.854 nvme0n2: ios=14414/0, merge=0/0, ticks=3514/0, in_queue=3514, util=95.31% 00:10:22.854 nvme0n3: ios=15716/0, merge=0/0, ticks=2846/0, in_queue=2846, util=95.98% 00:10:22.854 nvme0n4: ios=9376/0, merge=0/0, ticks=2678/0, in_queue=2678, util=96.72% 00:10:22.854 12:34:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:22.854 12:34:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:23.112 12:34:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:23.112 12:34:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:23.369 12:34:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:23.369 12:34:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:23.627 12:34:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:23.627 12:34:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:23.885 12:34:49 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:23.885 12:34:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:24.451 12:34:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:24.451 12:34:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 69007 00:10:24.451 12:34:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:24.451 12:34:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:24.451 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.451 12:34:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:24.451 12:34:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:24.451 12:34:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:24.451 12:34:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:24.451 12:34:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:24.451 12:34:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:24.451 nvmf hotplug test: fio failed as expected 00:10:24.451 12:34:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:24.451 12:34:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:24.451 12:34:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:24.451 12:34:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:24.709 12:34:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:24.709 12:34:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:24.709 12:34:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:24.709 12:34:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:24.709 12:34:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:24.709 12:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:24.709 12:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:10:24.709 12:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:24.709 12:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:10:24.709 12:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:24.709 12:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:24.709 rmmod nvme_tcp 00:10:24.709 rmmod nvme_fabrics 00:10:24.709 rmmod nvme_keyring 00:10:24.709 12:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:24.709 12:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:10:24.709 12:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:10:24.709 12:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 68631 ']' 00:10:24.709 12:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 68631 00:10:24.709 12:34:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- 
# '[' -z 68631 ']' 00:10:24.709 12:34:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 68631 00:10:24.709 12:34:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:10:24.709 12:34:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:24.709 12:34:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68631 00:10:24.709 killing process with pid 68631 00:10:24.709 12:34:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:24.709 12:34:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:24.709 12:34:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68631' 00:10:24.709 12:34:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 68631 00:10:24.709 12:34:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 68631 00:10:24.968 12:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:24.968 12:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:24.968 12:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:24.968 12:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:24.968 12:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:24.968 12:34:50 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:24.968 12:34:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:24.968 12:34:50 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:24.968 12:34:51 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:24.968 ************************************ 00:10:24.968 END TEST nvmf_fio_target 00:10:24.968 ************************************ 00:10:24.968 00:10:24.968 real 0m20.280s 00:10:24.968 user 1m15.727s 00:10:24.968 sys 0m11.759s 00:10:24.968 12:34:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:24.968 12:34:51 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.224 12:34:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:25.225 12:34:51 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:25.225 12:34:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:25.225 12:34:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:25.225 12:34:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:25.225 ************************************ 00:10:25.225 START TEST nvmf_bdevio 00:10:25.225 ************************************ 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:25.225 * Looking for test storage... 
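The bdevio pass that follows stands up its target entirely through RPC before pointing the bdevio unit-test binary at it. A condensed sketch of that sequence, with every name, size and address taken from the trace below (rpc_cmd in the trace is the suite's thin wrapper around scripts/rpc.py; the standalone form is an assumption but uses only commands that appear verbatim in the log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # target side: TCP transport, a 64 MiB / 512 B-block Malloc bdev, one subsystem
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # initiator side: bdevio attaches over TCP using a generated bdev_nvme config
    # (gen_nvmf_target_json), fed via process substitution as /dev/fd/62 in the trace:
    #   /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62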
00:10:25.225 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=16360ad5-8c23-4d49-afe0-9a35c426fec5 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.225 12:34:51 
nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- 
# [[ tcp == tcp ]] 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:25.225 Cannot find device "nvmf_tgt_br" 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:25.225 Cannot find device "nvmf_tgt_br2" 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:25.225 Cannot find device "nvmf_tgt_br" 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:25.225 Cannot find device "nvmf_tgt_br2" 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:10:25.225 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:25.482 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:25.482 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:25.482 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:25.482 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:10:25.482 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:25.482 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:25.482 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:10:25.482 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:25.482 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:25.482 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add 
nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:25.482 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:25.482 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:25.482 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:25.482 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:25.482 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:25.482 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:25.482 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:25.482 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:25.482 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:25.482 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:25.482 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:25.482 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:25.482 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:25.482 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:25.482 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:25.482 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:25.482 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:25.482 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:25.482 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:25.482 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:25.482 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:25.482 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:25.482 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:10:25.482 00:10:25.482 --- 10.0.0.2 ping statistics --- 00:10:25.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.482 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:10:25.482 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:25.482 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:25.482 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:10:25.482 00:10:25.482 --- 10.0.0.3 ping statistics --- 00:10:25.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.482 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:10:25.482 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:25.482 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:25.482 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:10:25.482 00:10:25.482 --- 10.0.0.1 ping statistics --- 00:10:25.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.482 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:10:25.482 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:25.482 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:10:25.482 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:25.482 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:25.482 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:25.482 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:25.482 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:25.482 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:25.482 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:25.482 12:34:51 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:25.482 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:25.482 12:34:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:25.482 12:34:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:25.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.739 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=69330 00:10:25.739 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:25.739 12:34:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 69330 00:10:25.739 12:34:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 69330 ']' 00:10:25.739 12:34:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.739 12:34:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:25.739 12:34:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.739 12:34:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:25.739 12:34:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:25.739 [2024-07-12 12:34:51.611737] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:10:25.739 [2024-07-12 12:34:51.611853] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:25.739 [2024-07-12 12:34:51.746694] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:25.997 [2024-07-12 12:34:51.895991] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:25.997 [2024-07-12 12:34:51.896564] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:25.997 [2024-07-12 12:34:51.896895] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:25.997 [2024-07-12 12:34:51.897446] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:25.997 [2024-07-12 12:34:51.897659] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:25.997 [2024-07-12 12:34:51.898040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:25.997 [2024-07-12 12:34:51.898208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:25.997 [2024-07-12 12:34:51.898368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:25.997 [2024-07-12 12:34:51.898314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:25.997 [2024-07-12 12:34:51.953955] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:26.562 12:34:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:26.562 12:34:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:10:26.562 12:34:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:26.562 12:34:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:26.562 12:34:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:26.562 12:34:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:26.562 12:34:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:26.562 12:34:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.562 12:34:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:26.562 [2024-07-12 12:34:52.590096] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:26.562 12:34:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.562 12:34:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:26.562 12:34:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.562 12:34:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:26.819 Malloc0 00:10:26.819 12:34:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.819 12:34:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:26.819 12:34:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.819 12:34:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:26.819 12:34:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.819 12:34:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:26.819 12:34:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.819 12:34:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:26.819 12:34:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.819 12:34:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:26.819 12:34:52 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.819 12:34:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:26.819 [2024-07-12 12:34:52.660453] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:26.819 12:34:52 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.819 12:34:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:26.819 12:34:52 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:26.819 12:34:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:10:26.819 12:34:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:10:26.819 12:34:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:26.819 12:34:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:26.819 { 00:10:26.819 "params": { 00:10:26.819 "name": "Nvme$subsystem", 00:10:26.819 "trtype": "$TEST_TRANSPORT", 00:10:26.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:26.819 "adrfam": "ipv4", 00:10:26.819 "trsvcid": "$NVMF_PORT", 00:10:26.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:26.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:26.819 "hdgst": ${hdgst:-false}, 00:10:26.819 "ddgst": ${ddgst:-false} 00:10:26.819 }, 00:10:26.819 "method": "bdev_nvme_attach_controller" 00:10:26.819 } 00:10:26.819 EOF 00:10:26.819 )") 00:10:26.819 12:34:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:10:26.819 12:34:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:10:26.819 12:34:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:10:26.819 12:34:52 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:26.819 "params": { 00:10:26.819 "name": "Nvme1", 00:10:26.819 "trtype": "tcp", 00:10:26.819 "traddr": "10.0.0.2", 00:10:26.819 "adrfam": "ipv4", 00:10:26.819 "trsvcid": "4420", 00:10:26.819 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:26.819 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:26.819 "hdgst": false, 00:10:26.819 "ddgst": false 00:10:26.819 }, 00:10:26.819 "method": "bdev_nvme_attach_controller" 00:10:26.819 }' 00:10:26.819 [2024-07-12 12:34:52.723565] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:10:26.819 [2024-07-12 12:34:52.723712] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69366 ] 00:10:27.077 [2024-07-12 12:34:52.893574] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:27.077 [2024-07-12 12:34:53.040552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:27.077 [2024-07-12 12:34:53.040674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.077 [2024-07-12 12:34:53.040671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:27.077 [2024-07-12 12:34:53.114925] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:10:27.334 I/O targets: 00:10:27.334 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:27.334 00:10:27.334 00:10:27.334 CUnit - A unit testing framework for C - Version 2.1-3 00:10:27.334 http://cunit.sourceforge.net/ 00:10:27.334 00:10:27.334 00:10:27.334 Suite: bdevio tests on: Nvme1n1 00:10:27.334 Test: blockdev write read block ...passed 00:10:27.334 Test: blockdev write zeroes read block ...passed 00:10:27.334 Test: blockdev write zeroes read no split ...passed 00:10:27.334 Test: blockdev write zeroes read split ...passed 00:10:27.334 Test: blockdev write zeroes read split partial ...passed 00:10:27.334 Test: blockdev reset ...[2024-07-12 12:34:53.261462] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:27.334 [2024-07-12 12:34:53.261602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9867c0 (9): Bad file descriptor 00:10:27.334 passed 00:10:27.334 Test: blockdev write read 8 blocks ...[2024-07-12 12:34:53.276926] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:27.334 passed 00:10:27.334 Test: blockdev write read size > 128k ...passed 00:10:27.334 Test: blockdev write read invalid size ...passed 00:10:27.334 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:27.334 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:27.334 Test: blockdev write read max offset ...passed 00:10:27.334 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:27.334 Test: blockdev writev readv 8 blocks ...passed 00:10:27.334 Test: blockdev writev readv 30 x 1block ...passed 00:10:27.334 Test: blockdev writev readv block ...passed 00:10:27.334 Test: blockdev writev readv size > 128k ...passed 00:10:27.334 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:27.334 Test: blockdev comparev and writev ...[2024-07-12 12:34:53.285237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:27.334 [2024-07-12 12:34:53.285440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:27.334 [2024-07-12 12:34:53.285470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:27.334 [2024-07-12 12:34:53.285483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:27.334 [2024-07-12 12:34:53.285798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:27.334 [2024-07-12 12:34:53.285817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:27.334 [2024-07-12 12:34:53.285834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:27.334 [2024-07-12 12:34:53.285845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:27.334 [2024-07-12 12:34:53.286123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:27.334 [2024-07-12 12:34:53.286141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:27.334 [2024-07-12 12:34:53.286158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:27.334 [2024-07-12 12:34:53.286169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:27.334 [2024-07-12 12:34:53.286469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:27.334 [2024-07-12 12:34:53.286492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:27.334 [2024-07-12 12:34:53.286509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:27.334 [2024-07-12 12:34:53.286520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:10:27.334 passed 00:10:27.334 Test: blockdev nvme passthru rw ...passed 00:10:27.334 Test: blockdev nvme passthru vendor specific ...[2024-07-12 12:34:53.287609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:27.334 [2024-07-12 12:34:53.287635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:27.334 [2024-07-12 12:34:53.287748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:27.334 [2024-07-12 12:34:53.287772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:27.334 [2024-07-12 12:34:53.287872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:27.334 [2024-07-12 12:34:53.287888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:27.334 [2024-07-12 12:34:53.287993] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:27.334 [2024-07-12 12:34:53.288010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:27.334 passed 00:10:27.334 Test: blockdev nvme admin passthru ...passed 00:10:27.334 Test: blockdev copy ...passed 00:10:27.334 00:10:27.334 Run Summary: Type Total Ran Passed Failed Inactive 00:10:27.334 suites 1 1 n/a 0 0 00:10:27.334 tests 23 23 23 0 0 00:10:27.334 asserts 152 152 152 0 n/a 00:10:27.334 00:10:27.334 Elapsed time = 0.156 seconds 00:10:27.658 12:34:53 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:27.658 12:34:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.658 12:34:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:27.658 12:34:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.658 12:34:53 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:27.658 12:34:53 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:27.658 12:34:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:27.658 12:34:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:10:27.658 12:34:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:27.658 12:34:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:10:27.658 12:34:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:27.658 12:34:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:27.658 rmmod nvme_tcp 00:10:27.658 rmmod nvme_fabrics 00:10:27.658 rmmod nvme_keyring 00:10:27.658 12:34:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:27.658 12:34:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:10:27.658 12:34:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:10:27.658 12:34:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 69330 ']' 00:10:27.658 12:34:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 69330 00:10:27.658 12:34:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 69330 ']' 00:10:27.658 12:34:53 nvmf_tcp.nvmf_bdevio -- 
common/autotest_common.sh@952 -- # kill -0 69330 00:10:27.658 12:34:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:10:27.658 12:34:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:27.658 12:34:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69330 00:10:27.658 12:34:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:10:27.658 12:34:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:10:27.658 killing process with pid 69330 00:10:27.658 12:34:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69330' 00:10:27.658 12:34:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 69330 00:10:27.658 12:34:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 69330 00:10:27.937 12:34:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:27.937 12:34:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:27.937 12:34:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:27.937 12:34:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:27.937 12:34:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:27.937 12:34:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.937 12:34:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:27.937 12:34:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.937 12:34:53 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:27.937 ************************************ 00:10:27.937 END TEST nvmf_bdevio 00:10:27.937 ************************************ 00:10:27.937 00:10:27.937 real 0m2.907s 00:10:27.937 user 0m9.528s 00:10:27.937 sys 0m0.810s 00:10:27.937 12:34:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:27.937 12:34:53 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:28.196 12:34:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:28.196 12:34:54 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:28.196 12:34:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:28.196 12:34:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:28.196 12:34:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:28.196 ************************************ 00:10:28.196 START TEST nvmf_auth_target 00:10:28.196 ************************************ 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:10:28.196 * Looking for test storage... 
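nvmftestinit below rebuilds the same virtual topology the bdevio pass used: one host-side veth leg (nvmf_init_if, 10.0.0.1/24) and two target legs (nvmf_tgt_if at 10.0.0.2/24 and nvmf_tgt_if2 at 10.0.0.3/24) living in the nvmf_tgt_ns_spdk namespace, with the peer ends joined by the nvmf_br bridge and port 4420 opened on the initiator interface. Stripped of the xtrace markers, the essential commands from the trace are (a sketch; needs root, ordering as logged):

    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: the *_if ends carry traffic, the *_br ends get enslaved to the bridge
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3             # host -> namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # namespace -> host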
00:10:28.196 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=16360ad5-8c23-4d49-afe0-9a35c426fec5 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:28.196 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:28.197 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:28.197 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:28.197 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:28.197 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:28.197 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:28.197 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:28.197 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:28.197 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:28.197 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:28.197 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:28.197 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:28.197 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:28.197 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:28.197 Cannot find device "nvmf_tgt_br" 00:10:28.197 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:10:28.197 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:28.197 Cannot find device "nvmf_tgt_br2" 00:10:28.197 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:10:28.197 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:28.197 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:28.197 Cannot find device "nvmf_tgt_br" 00:10:28.197 
12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:10:28.197 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:28.197 Cannot find device "nvmf_tgt_br2" 00:10:28.197 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:10:28.197 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:28.197 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:28.197 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:28.197 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:28.197 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:10:28.197 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:28.197 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:28.197 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:10:28.197 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:28.197 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:28.455 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:28.455 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:28.455 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:28.455 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:28.455 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:28.455 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:28.455 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:28.455 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:28.455 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:28.455 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:28.455 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:28.455 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:28.455 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:28.455 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:28.455 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:28.455 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:28.455 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:28.455 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:28.455 12:34:54 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:28.455 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:28.455 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:28.455 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:28.455 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:28.455 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:10:28.455 00:10:28.455 --- 10.0.0.2 ping statistics --- 00:10:28.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.455 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:10:28.455 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:28.455 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:28.455 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:10:28.455 00:10:28.455 --- 10.0.0.3 ping statistics --- 00:10:28.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.455 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:10:28.455 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:28.455 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:28.455 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:10:28.455 00:10:28.455 --- 10.0.0.1 ping statistics --- 00:10:28.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.455 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:10:28.455 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:28.455 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:10:28.455 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:28.455 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:28.455 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:28.455 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:28.455 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:28.455 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:28.455 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:28.455 12:34:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:10:28.455 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:28.455 12:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:28.455 12:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.455 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=69535 00:10:28.455 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 69535 00:10:28.455 12:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69535 ']' 00:10:28.455 12:34:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:10:28.455 12:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.455 12:34:54 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:28.455 12:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.455 12:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:28.455 12:34:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.828 12:34:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=69572 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=41c521671dbec2da6784b9a909f901a1a0f287a2a60add49 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.oNo 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 41c521671dbec2da6784b9a909f901a1a0f287a2a60add49 0 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 41c521671dbec2da6784b9a909f901a1a0f287a2a60add49 0 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=41c521671dbec2da6784b9a909f901a1a0f287a2a60add49 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.oNo 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.oNo 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.oNo 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=49846153ccde619d46ceeedcf6308bc58a07fbcb9622d171b59b99d7225938f9 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.fXk 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 49846153ccde619d46ceeedcf6308bc58a07fbcb9622d171b59b99d7225938f9 3 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 49846153ccde619d46ceeedcf6308bc58a07fbcb9622d171b59b99d7225938f9 3 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=49846153ccde619d46ceeedcf6308bc58a07fbcb9622d171b59b99d7225938f9 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.fXk 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.fXk 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.fXk 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d1c02178c270996bb470f7f459bab8b9 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.CnX 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d1c02178c270996bb470f7f459bab8b9 1 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d1c02178c270996bb470f7f459bab8b9 1 
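Note on the gen_dhchap_key traces above: each one boils down to pulling N random bytes as hex with xxd and emitting them in the DHHC-1 secret representation, i.e. DHHC-1:<two-digit hash id>:<base64 of the key plus a 4-byte CRC-32>:. A minimal standalone sketch of that recipe follows; treating the ASCII hex string itself as the payload and appending the CRC-32 little-endian are assumptions inferred from the DHHC-1:00:NDFjNTIx... secret that appears later in this trace, not a copy of the nvmf/common.sh helper.

  # sketch: generate a 48-character hex secret and print it in DHHC-1:<digest>:<b64>: form
  key=$(xxd -p -c0 -l 24 /dev/urandom)   # 24 random bytes -> 48 hex characters
  digest=00                              # 00 = null, 01 = sha256, 02 = sha384, 03 = sha512
  python3 -c 'import sys, base64, zlib
  k = sys.argv[1].encode()                           # assumption: the hex string is the payload
  crc = zlib.crc32(k).to_bytes(4, "little")          # assumption: 4-byte CRC-32, little-endian
  print("DHHC-1:%s:%s:" % (sys.argv[2], base64.b64encode(k + crc).decode()))' "$key" "$digest"

The resulting string is what later shows up verbatim as the --dhchap-secret / --dhchap-ctrl-secret arguments to nvme connect.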
00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d1c02178c270996bb470f7f459bab8b9 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.CnX 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.CnX 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.CnX 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ac8a6f1e551d2d47cca4c2248797d8060bb5118b929cc630 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.O6C 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ac8a6f1e551d2d47cca4c2248797d8060bb5118b929cc630 2 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ac8a6f1e551d2d47cca4c2248797d8060bb5118b929cc630 2 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ac8a6f1e551d2d47cca4c2248797d8060bb5118b929cc630 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:10:29.829 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:30.087 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.O6C 00:10:30.088 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.O6C 00:10:30.088 12:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.O6C 00:10:30.088 12:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:10:30.088 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:30.088 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:30.088 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:30.088 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:10:30.088 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:10:30.088 
12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:10:30.088 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a80956dc226850ac0adc512592a40b97535b47db8d6f43a0 00:10:30.088 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:10:30.088 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Aa4 00:10:30.088 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a80956dc226850ac0adc512592a40b97535b47db8d6f43a0 2 00:10:30.088 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a80956dc226850ac0adc512592a40b97535b47db8d6f43a0 2 00:10:30.088 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:30.088 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:30.088 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a80956dc226850ac0adc512592a40b97535b47db8d6f43a0 00:10:30.088 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:10:30.088 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:30.088 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Aa4 00:10:30.088 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Aa4 00:10:30.088 12:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.Aa4 00:10:30.088 12:34:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:10:30.088 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:30.088 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:30.088 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:30.088 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:10:30.088 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:10:30.088 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:10:30.088 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=eaf8bb99e0508761e83d8588952d1920 00:10:30.088 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:10:30.088 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.scu 00:10:30.088 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key eaf8bb99e0508761e83d8588952d1920 1 00:10:30.088 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 eaf8bb99e0508761e83d8588952d1920 1 00:10:30.088 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:30.088 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:30.088 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=eaf8bb99e0508761e83d8588952d1920 00:10:30.088 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:10:30.088 12:34:55 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:30.088 12:34:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.scu 00:10:30.088 12:34:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.scu 00:10:30.088 12:34:56 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.scu 00:10:30.088 12:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:10:30.088 12:34:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:10:30.088 12:34:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:10:30.088 12:34:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:10:30.088 12:34:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:10:30.088 12:34:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:10:30.088 12:34:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:10:30.088 12:34:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c37f9fda19b3923a38266c2e278a2f529c38c4f2067d5ba8588fd195efe0f4a5 00:10:30.088 12:34:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:10:30.088 12:34:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.egO 00:10:30.088 12:34:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c37f9fda19b3923a38266c2e278a2f529c38c4f2067d5ba8588fd195efe0f4a5 3 00:10:30.088 12:34:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c37f9fda19b3923a38266c2e278a2f529c38c4f2067d5ba8588fd195efe0f4a5 3 00:10:30.088 12:34:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:10:30.088 12:34:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:10:30.088 12:34:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c37f9fda19b3923a38266c2e278a2f529c38c4f2067d5ba8588fd195efe0f4a5 00:10:30.088 12:34:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:10:30.088 12:34:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:10:30.088 12:34:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.egO 00:10:30.088 12:34:56 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.egO 00:10:30.088 12:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.egO 00:10:30.088 12:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:10:30.088 12:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 69535 00:10:30.088 12:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69535 ']' 00:10:30.088 12:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.088 12:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:30.088 12:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
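At this point two SPDK processes are running: the nvmf target on /var/tmp/spdk.sock (pid 69535, inside the nvmf_tgt_ns_spdk namespace) and the host-side spdk_tgt on /var/tmp/host.sock (pid 69572). The next stretch of the trace registers every generated key file with both keyrings before any authentication runs; condensed, the per-key RPC pattern is the sketch below (key0/ckey0 shown, the remaining indexes follow the same shape).

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # target keyring: subsystem-side key (key0) and bidirectional controller key (ckey0)
  $rpc -s /var/tmp/spdk.sock keyring_file_add_key key0  /tmp/spdk.key-null.oNo
  $rpc -s /var/tmp/spdk.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.fXk
  # host keyring: the same files, so the initiator can answer (and issue) the challenge
  $rpc -s /var/tmp/host.sock keyring_file_add_key key0  /tmp/spdk.key-null.oNo
  $rpc -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.fXk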
00:10:30.088 12:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:30.088 12:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.653 12:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:30.653 12:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:10:30.653 12:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 69572 /var/tmp/host.sock 00:10:30.653 12:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 69572 ']' 00:10:30.653 12:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:10:30.653 12:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:30.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:10:30.653 12:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:10:30.653 12:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:30.653 12:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.653 12:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:30.653 12:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:10:30.653 12:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:10:30.653 12:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.653 12:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.653 12:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.911 12:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:30.911 12:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.oNo 00:10:30.911 12:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:30.911 12:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.911 12:34:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:30.911 12:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.oNo 00:10:30.911 12:34:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.oNo 00:10:31.169 12:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.fXk ]] 00:10:31.169 12:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.fXk 00:10:31.169 12:34:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.169 12:34:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.169 12:34:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.169 12:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.fXk 00:10:31.169 12:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 
/tmp/spdk.key-sha512.fXk 00:10:31.428 12:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:31.428 12:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.CnX 00:10:31.428 12:34:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.428 12:34:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.428 12:34:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.428 12:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.CnX 00:10:31.428 12:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.CnX 00:10:31.686 12:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.O6C ]] 00:10:31.686 12:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.O6C 00:10:31.686 12:34:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.686 12:34:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.686 12:34:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.686 12:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.O6C 00:10:31.686 12:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.O6C 00:10:31.944 12:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:31.944 12:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Aa4 00:10:31.944 12:34:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.944 12:34:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.944 12:34:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.944 12:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Aa4 00:10:31.944 12:34:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Aa4 00:10:32.202 12:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.scu ]] 00:10:32.202 12:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.scu 00:10:32.202 12:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.202 12:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.202 12:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.202 12:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.scu 00:10:32.202 12:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.scu 00:10:32.460 12:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:10:32.460 
12:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.egO 00:10:32.460 12:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.460 12:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.460 12:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.460 12:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.egO 00:10:32.460 12:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.egO 00:10:32.718 12:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:10:32.718 12:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:10:32.718 12:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:32.718 12:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:32.718 12:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:32.718 12:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:32.976 12:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:10:32.976 12:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:32.976 12:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:32.976 12:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:32.976 12:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:32.976 12:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:32.976 12:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:32.976 12:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.976 12:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.976 12:34:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.976 12:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:32.976 12:34:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:33.234 00:10:33.234 12:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:33.234 12:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:10:33.234 12:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:33.491 12:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:33.491 12:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:33.492 12:34:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.492 12:34:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.492 12:34:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.492 12:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:33.492 { 00:10:33.492 "cntlid": 1, 00:10:33.492 "qid": 0, 00:10:33.492 "state": "enabled", 00:10:33.492 "thread": "nvmf_tgt_poll_group_000", 00:10:33.492 "listen_address": { 00:10:33.492 "trtype": "TCP", 00:10:33.492 "adrfam": "IPv4", 00:10:33.492 "traddr": "10.0.0.2", 00:10:33.492 "trsvcid": "4420" 00:10:33.492 }, 00:10:33.492 "peer_address": { 00:10:33.492 "trtype": "TCP", 00:10:33.492 "adrfam": "IPv4", 00:10:33.492 "traddr": "10.0.0.1", 00:10:33.492 "trsvcid": "35680" 00:10:33.492 }, 00:10:33.492 "auth": { 00:10:33.492 "state": "completed", 00:10:33.492 "digest": "sha256", 00:10:33.492 "dhgroup": "null" 00:10:33.492 } 00:10:33.492 } 00:10:33.492 ]' 00:10:33.492 12:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:33.492 12:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:33.492 12:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:33.750 12:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:33.750 12:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:33.750 12:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:33.750 12:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:33.750 12:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:34.007 12:34:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:00:NDFjNTIxNjcxZGJlYzJkYTY3ODRiOWE5MDlmOTAxYTFhMGYyODdhMmE2MGFkZDQ58CXGvw==: --dhchap-ctrl-secret DHHC-1:03:NDk4NDYxNTNjY2RlNjE5ZDQ2Y2VlZWRjZjYzMDhiYzU4YTA3ZmJjYjk2MjJkMTcxYjU5Yjk5ZDcyMjU5MzhmOaZfXz8=: 00:10:38.194 12:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:38.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:38.194 12:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:10:38.194 12:35:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.194 12:35:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.194 12:35:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.194 12:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:10:38.194 12:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:38.194 12:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:38.762 12:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:10:38.762 12:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:38.762 12:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:38.762 12:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:38.762 12:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:38.762 12:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:38.762 12:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:38.762 12:35:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.762 12:35:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.762 12:35:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.762 12:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:38.762 12:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:39.020 00:10:39.020 12:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:39.020 12:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:39.020 12:35:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:39.279 12:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:39.279 12:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:39.279 12:35:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.279 12:35:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.279 12:35:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.279 12:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:39.279 { 00:10:39.279 "cntlid": 3, 00:10:39.279 "qid": 0, 00:10:39.279 "state": "enabled", 00:10:39.279 "thread": "nvmf_tgt_poll_group_000", 00:10:39.279 "listen_address": { 00:10:39.279 "trtype": "TCP", 00:10:39.279 "adrfam": "IPv4", 00:10:39.279 "traddr": "10.0.0.2", 00:10:39.279 "trsvcid": "4420" 00:10:39.279 }, 00:10:39.279 "peer_address": { 00:10:39.279 "trtype": "TCP", 00:10:39.279 
"adrfam": "IPv4", 00:10:39.279 "traddr": "10.0.0.1", 00:10:39.279 "trsvcid": "58362" 00:10:39.279 }, 00:10:39.279 "auth": { 00:10:39.279 "state": "completed", 00:10:39.279 "digest": "sha256", 00:10:39.279 "dhgroup": "null" 00:10:39.279 } 00:10:39.279 } 00:10:39.279 ]' 00:10:39.279 12:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:39.279 12:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:39.279 12:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:39.279 12:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:39.279 12:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:39.279 12:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:39.279 12:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:39.279 12:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:39.537 12:35:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:01:ZDFjMDIxNzhjMjcwOTk2YmI0NzBmN2Y0NTliYWI4YjkvSJaJ: --dhchap-ctrl-secret DHHC-1:02:YWM4YTZmMWU1NTFkMmQ0N2NjYTRjMjI0ODc5N2Q4MDYwYmI1MTE4YjkyOWNjNjMwjf/aOg==: 00:10:40.483 12:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:40.483 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:40.483 12:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:10:40.483 12:35:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:40.483 12:35:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.483 12:35:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:40.483 12:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:40.483 12:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:40.483 12:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:40.483 12:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:10:40.483 12:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:40.483 12:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:40.483 12:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:40.483 12:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:40.483 12:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:40.483 12:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.483 12:35:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:40.483 12:35:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.483 12:35:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:40.483 12:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.483 12:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:40.746 00:10:40.746 12:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:40.746 12:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:40.746 12:35:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:41.005 12:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:41.005 12:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:41.005 12:35:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.005 12:35:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.005 12:35:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.005 12:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:41.005 { 00:10:41.005 "cntlid": 5, 00:10:41.005 "qid": 0, 00:10:41.005 "state": "enabled", 00:10:41.005 "thread": "nvmf_tgt_poll_group_000", 00:10:41.005 "listen_address": { 00:10:41.005 "trtype": "TCP", 00:10:41.005 "adrfam": "IPv4", 00:10:41.005 "traddr": "10.0.0.2", 00:10:41.005 "trsvcid": "4420" 00:10:41.005 }, 00:10:41.005 "peer_address": { 00:10:41.005 "trtype": "TCP", 00:10:41.005 "adrfam": "IPv4", 00:10:41.005 "traddr": "10.0.0.1", 00:10:41.005 "trsvcid": "58392" 00:10:41.005 }, 00:10:41.005 "auth": { 00:10:41.005 "state": "completed", 00:10:41.005 "digest": "sha256", 00:10:41.005 "dhgroup": "null" 00:10:41.005 } 00:10:41.005 } 00:10:41.005 ]' 00:10:41.005 12:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:41.264 12:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:41.264 12:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:41.264 12:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:41.264 12:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:41.264 12:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:41.264 12:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:41.264 12:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:41.522 12:35:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:02:YTgwOTU2ZGMyMjY4NTBhYzBhZGM1MTI1OTJhNDBiOTc1MzViNDdkYjhkNmY0M2EwVb/DEw==: --dhchap-ctrl-secret DHHC-1:01:ZWFmOGJiOTllMDUwODc2MWU4M2Q4NTg4OTUyZDE5MjBPakZ8: 00:10:42.089 12:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:42.089 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:42.089 12:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:10:42.089 12:35:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.089 12:35:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.089 12:35:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.089 12:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:42.089 12:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:42.089 12:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:10:42.661 12:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:10:42.661 12:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:42.661 12:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:42.661 12:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:10:42.661 12:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:42.661 12:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:42.661 12:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key3 00:10:42.661 12:35:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:42.661 12:35:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.661 12:35:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:42.661 12:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:42.661 12:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:42.661 00:10:42.661 12:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:42.661 12:35:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:42.661 12:35:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:43.229 12:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:43.229 12:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:43.229 12:35:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.229 12:35:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.229 12:35:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.229 12:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:43.229 { 00:10:43.229 "cntlid": 7, 00:10:43.229 "qid": 0, 00:10:43.229 "state": "enabled", 00:10:43.229 "thread": "nvmf_tgt_poll_group_000", 00:10:43.229 "listen_address": { 00:10:43.229 "trtype": "TCP", 00:10:43.229 "adrfam": "IPv4", 00:10:43.229 "traddr": "10.0.0.2", 00:10:43.229 "trsvcid": "4420" 00:10:43.229 }, 00:10:43.229 "peer_address": { 00:10:43.229 "trtype": "TCP", 00:10:43.229 "adrfam": "IPv4", 00:10:43.229 "traddr": "10.0.0.1", 00:10:43.229 "trsvcid": "58402" 00:10:43.229 }, 00:10:43.229 "auth": { 00:10:43.229 "state": "completed", 00:10:43.229 "digest": "sha256", 00:10:43.229 "dhgroup": "null" 00:10:43.229 } 00:10:43.229 } 00:10:43.229 ]' 00:10:43.229 12:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:43.229 12:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:43.229 12:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:43.229 12:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:10:43.229 12:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:43.229 12:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:43.229 12:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:43.229 12:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:43.487 12:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:03:YzM3ZjlmZGExOWIzOTIzYTM4MjY2YzJlMjc4YTJmNTI5YzM4YzRmMjA2N2Q1YmE4NTg4ZmQxOTVlZmUwZjRhNUvHC7o=: 00:10:44.422 12:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:44.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:44.422 12:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:10:44.422 12:35:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.422 12:35:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.422 12:35:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.422 12:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # 
for dhgroup in "${dhgroups[@]}" 00:10:44.422 12:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:44.422 12:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:44.422 12:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:44.422 12:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:10:44.422 12:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:44.422 12:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:44.422 12:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:44.422 12:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:44.422 12:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:44.422 12:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:44.422 12:35:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.422 12:35:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.422 12:35:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.422 12:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:44.422 12:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:44.680 00:10:44.939 12:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:44.939 12:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:44.939 12:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:44.939 12:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:44.939 12:35:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:44.939 12:35:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.939 12:35:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.939 12:35:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.939 12:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:44.939 { 00:10:44.939 "cntlid": 9, 00:10:44.939 "qid": 0, 00:10:44.939 "state": "enabled", 00:10:44.939 "thread": "nvmf_tgt_poll_group_000", 00:10:44.939 "listen_address": { 00:10:44.939 "trtype": "TCP", 00:10:44.939 "adrfam": "IPv4", 00:10:44.939 
"traddr": "10.0.0.2", 00:10:44.939 "trsvcid": "4420" 00:10:44.939 }, 00:10:44.939 "peer_address": { 00:10:44.939 "trtype": "TCP", 00:10:44.939 "adrfam": "IPv4", 00:10:44.939 "traddr": "10.0.0.1", 00:10:44.939 "trsvcid": "54406" 00:10:44.939 }, 00:10:44.939 "auth": { 00:10:44.939 "state": "completed", 00:10:44.939 "digest": "sha256", 00:10:44.939 "dhgroup": "ffdhe2048" 00:10:44.939 } 00:10:44.939 } 00:10:44.939 ]' 00:10:44.939 12:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:45.195 12:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:45.195 12:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:45.196 12:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:45.196 12:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:45.196 12:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:45.196 12:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:45.196 12:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:45.452 12:35:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:00:NDFjNTIxNjcxZGJlYzJkYTY3ODRiOWE5MDlmOTAxYTFhMGYyODdhMmE2MGFkZDQ58CXGvw==: --dhchap-ctrl-secret DHHC-1:03:NDk4NDYxNTNjY2RlNjE5ZDQ2Y2VlZWRjZjYzMDhiYzU4YTA3ZmJjYjk2MjJkMTcxYjU5Yjk5ZDcyMjU5MzhmOaZfXz8=: 00:10:46.448 12:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:46.448 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:46.448 12:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:10:46.448 12:35:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.448 12:35:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.448 12:35:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.448 12:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:46.448 12:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:46.448 12:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:46.448 12:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:10:46.448 12:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:46.448 12:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:46.448 12:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:46.448 12:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:46.448 12:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:46.448 12:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:46.448 12:35:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.448 12:35:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.448 12:35:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.448 12:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:46.448 12:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:47.015 00:10:47.015 12:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:47.015 12:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:47.015 12:35:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:47.272 12:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:47.272 12:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:47.272 12:35:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.272 12:35:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.272 12:35:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.272 12:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:47.272 { 00:10:47.272 "cntlid": 11, 00:10:47.272 "qid": 0, 00:10:47.272 "state": "enabled", 00:10:47.272 "thread": "nvmf_tgt_poll_group_000", 00:10:47.272 "listen_address": { 00:10:47.272 "trtype": "TCP", 00:10:47.272 "adrfam": "IPv4", 00:10:47.272 "traddr": "10.0.0.2", 00:10:47.272 "trsvcid": "4420" 00:10:47.272 }, 00:10:47.272 "peer_address": { 00:10:47.272 "trtype": "TCP", 00:10:47.272 "adrfam": "IPv4", 00:10:47.272 "traddr": "10.0.0.1", 00:10:47.272 "trsvcid": "54442" 00:10:47.272 }, 00:10:47.272 "auth": { 00:10:47.272 "state": "completed", 00:10:47.272 "digest": "sha256", 00:10:47.272 "dhgroup": "ffdhe2048" 00:10:47.272 } 00:10:47.272 } 00:10:47.272 ]' 00:10:47.272 12:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:47.272 12:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:47.272 12:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:47.273 12:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:47.273 12:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:47.529 12:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:47.529 12:35:13 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:47.529 12:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:47.787 12:35:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:01:ZDFjMDIxNzhjMjcwOTk2YmI0NzBmN2Y0NTliYWI4YjkvSJaJ: --dhchap-ctrl-secret DHHC-1:02:YWM4YTZmMWU1NTFkMmQ0N2NjYTRjMjI0ODc5N2Q4MDYwYmI1MTE4YjkyOWNjNjMwjf/aOg==: 00:10:48.354 12:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:48.354 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:48.354 12:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:10:48.354 12:35:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.354 12:35:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.354 12:35:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.354 12:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:48.354 12:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:48.354 12:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:48.613 12:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:10:48.613 12:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:48.613 12:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:48.613 12:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:48.613 12:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:48.613 12:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:48.613 12:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:48.613 12:35:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.613 12:35:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.613 12:35:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.614 12:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:48.614 12:35:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:49.180 00:10:49.180 12:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:49.180 12:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:49.180 12:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:49.438 12:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:49.438 12:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:49.438 12:35:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.438 12:35:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.438 12:35:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.438 12:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:49.438 { 00:10:49.438 "cntlid": 13, 00:10:49.438 "qid": 0, 00:10:49.438 "state": "enabled", 00:10:49.438 "thread": "nvmf_tgt_poll_group_000", 00:10:49.438 "listen_address": { 00:10:49.438 "trtype": "TCP", 00:10:49.438 "adrfam": "IPv4", 00:10:49.438 "traddr": "10.0.0.2", 00:10:49.438 "trsvcid": "4420" 00:10:49.438 }, 00:10:49.438 "peer_address": { 00:10:49.438 "trtype": "TCP", 00:10:49.438 "adrfam": "IPv4", 00:10:49.438 "traddr": "10.0.0.1", 00:10:49.438 "trsvcid": "54480" 00:10:49.438 }, 00:10:49.438 "auth": { 00:10:49.438 "state": "completed", 00:10:49.438 "digest": "sha256", 00:10:49.438 "dhgroup": "ffdhe2048" 00:10:49.438 } 00:10:49.438 } 00:10:49.438 ]' 00:10:49.438 12:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:49.438 12:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:49.438 12:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:49.438 12:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:49.438 12:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:49.438 12:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:49.438 12:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:49.439 12:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:49.749 12:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:02:YTgwOTU2ZGMyMjY4NTBhYzBhZGM1MTI1OTJhNDBiOTc1MzViNDdkYjhkNmY0M2EwVb/DEw==: --dhchap-ctrl-secret DHHC-1:01:ZWFmOGJiOTllMDUwODc2MWU4M2Q4NTg4OTUyZDE5MjBPakZ8: 00:10:50.370 12:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:50.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:50.370 12:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 
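For readability, the per-key sequence this trace keeps repeating can be condensed into the shell sketch below. It is reconstructed only from the commands visible in the log, not taken from auth.sh itself: the RPC socket /var/tmp/host.sock, the subsystem NQN nqn.2024-03.io.spdk:cnode0 and the host NQN/UUID are simply the values this run uses, and key2/ckey2 name DH-HMAC-CHAP keys the test set up earlier in the run (outside this excerpt).

    # Condensed sketch of one connect_authenticate iteration, assembled from the logged commands.
    # Assumes the nvmf target is on the default RPC socket and a separate bdev_nvme host app
    # is listening on /var/tmp/host.sock, with keys key2/ckey2 already registered.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5

    # Host side: restrict negotiation to the digest/dhgroup under test.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    # Target side: allow this host on the subsystem with a key pair (bidirectional auth).
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Host side: attach a controller, which forces the DH-HMAC-CHAP handshake.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Confirm the controller came up, then tear it back down.
    [[ "$("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

    # Drop the host entry again so the next key can be tested.
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The trace then repeats this loop for each key index and each DH group (ffdhe2048, ffdhe3072, ffdhe4096, ...), which is why the same RPC lines recur below with only the key name and dhgroup changing.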
00:10:50.370 12:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.370 12:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.370 12:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.370 12:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:50.370 12:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:50.370 12:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:10:50.629 12:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:10:50.629 12:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:50.629 12:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:50.629 12:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:10:50.629 12:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:50.629 12:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:50.629 12:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key3 00:10:50.629 12:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.629 12:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.629 12:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.629 12:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:50.629 12:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:50.888 00:10:50.888 12:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:50.888 12:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:50.888 12:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:51.147 12:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:51.147 12:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:51.147 12:35:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.147 12:35:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.147 12:35:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.147 12:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:51.147 { 00:10:51.147 "cntlid": 15, 00:10:51.147 "qid": 0, 
00:10:51.147 "state": "enabled", 00:10:51.147 "thread": "nvmf_tgt_poll_group_000", 00:10:51.147 "listen_address": { 00:10:51.147 "trtype": "TCP", 00:10:51.147 "adrfam": "IPv4", 00:10:51.147 "traddr": "10.0.0.2", 00:10:51.147 "trsvcid": "4420" 00:10:51.147 }, 00:10:51.147 "peer_address": { 00:10:51.147 "trtype": "TCP", 00:10:51.147 "adrfam": "IPv4", 00:10:51.147 "traddr": "10.0.0.1", 00:10:51.147 "trsvcid": "54512" 00:10:51.147 }, 00:10:51.147 "auth": { 00:10:51.147 "state": "completed", 00:10:51.147 "digest": "sha256", 00:10:51.147 "dhgroup": "ffdhe2048" 00:10:51.147 } 00:10:51.147 } 00:10:51.147 ]' 00:10:51.147 12:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:51.406 12:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:51.406 12:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:51.406 12:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:51.406 12:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:51.406 12:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:51.406 12:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:51.406 12:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:51.664 12:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:03:YzM3ZjlmZGExOWIzOTIzYTM4MjY2YzJlMjc4YTJmNTI5YzM4YzRmMjA2N2Q1YmE4NTg4ZmQxOTVlZmUwZjRhNUvHC7o=: 00:10:52.231 12:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:52.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:52.231 12:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:10:52.232 12:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.232 12:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.232 12:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.232 12:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:10:52.232 12:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:52.232 12:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:52.232 12:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:52.491 12:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:10:52.491 12:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:52.491 12:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:52.491 12:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe3072 00:10:52.491 12:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:10:52.491 12:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:52.491 12:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:52.491 12:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.491 12:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.491 12:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.491 12:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:52.491 12:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:53.058 00:10:53.058 12:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:53.058 12:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:53.058 12:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:53.315 12:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:53.315 12:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:53.316 12:35:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.316 12:35:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.316 12:35:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.316 12:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:53.316 { 00:10:53.316 "cntlid": 17, 00:10:53.316 "qid": 0, 00:10:53.316 "state": "enabled", 00:10:53.316 "thread": "nvmf_tgt_poll_group_000", 00:10:53.316 "listen_address": { 00:10:53.316 "trtype": "TCP", 00:10:53.316 "adrfam": "IPv4", 00:10:53.316 "traddr": "10.0.0.2", 00:10:53.316 "trsvcid": "4420" 00:10:53.316 }, 00:10:53.316 "peer_address": { 00:10:53.316 "trtype": "TCP", 00:10:53.316 "adrfam": "IPv4", 00:10:53.316 "traddr": "10.0.0.1", 00:10:53.316 "trsvcid": "54558" 00:10:53.316 }, 00:10:53.316 "auth": { 00:10:53.316 "state": "completed", 00:10:53.316 "digest": "sha256", 00:10:53.316 "dhgroup": "ffdhe3072" 00:10:53.316 } 00:10:53.316 } 00:10:53.316 ]' 00:10:53.316 12:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:53.316 12:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:53.316 12:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:53.316 12:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:53.316 12:35:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:53.659 12:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:53.660 12:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:53.660 12:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:53.660 12:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:00:NDFjNTIxNjcxZGJlYzJkYTY3ODRiOWE5MDlmOTAxYTFhMGYyODdhMmE2MGFkZDQ58CXGvw==: --dhchap-ctrl-secret DHHC-1:03:NDk4NDYxNTNjY2RlNjE5ZDQ2Y2VlZWRjZjYzMDhiYzU4YTA3ZmJjYjk2MjJkMTcxYjU5Yjk5ZDcyMjU5MzhmOaZfXz8=: 00:10:54.594 12:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:54.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:54.594 12:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:10:54.594 12:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.594 12:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.594 12:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.594 12:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:54.594 12:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:54.594 12:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:54.852 12:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:10:54.852 12:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:54.852 12:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:54.852 12:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:54.852 12:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:10:54.852 12:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:54.852 12:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:54.852 12:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.852 12:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.852 12:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.852 12:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:54.852 
12:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:55.111 00:10:55.111 12:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:55.111 12:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:55.111 12:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:55.369 12:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:55.369 12:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:55.369 12:35:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.369 12:35:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.369 12:35:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.369 12:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:55.369 { 00:10:55.369 "cntlid": 19, 00:10:55.369 "qid": 0, 00:10:55.369 "state": "enabled", 00:10:55.369 "thread": "nvmf_tgt_poll_group_000", 00:10:55.369 "listen_address": { 00:10:55.369 "trtype": "TCP", 00:10:55.369 "adrfam": "IPv4", 00:10:55.369 "traddr": "10.0.0.2", 00:10:55.369 "trsvcid": "4420" 00:10:55.369 }, 00:10:55.369 "peer_address": { 00:10:55.369 "trtype": "TCP", 00:10:55.369 "adrfam": "IPv4", 00:10:55.369 "traddr": "10.0.0.1", 00:10:55.369 "trsvcid": "48228" 00:10:55.369 }, 00:10:55.369 "auth": { 00:10:55.369 "state": "completed", 00:10:55.369 "digest": "sha256", 00:10:55.369 "dhgroup": "ffdhe3072" 00:10:55.369 } 00:10:55.369 } 00:10:55.369 ]' 00:10:55.369 12:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:55.369 12:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:55.628 12:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:55.628 12:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:55.628 12:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:55.628 12:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:55.628 12:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:55.628 12:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:55.886 12:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:01:ZDFjMDIxNzhjMjcwOTk2YmI0NzBmN2Y0NTliYWI4YjkvSJaJ: --dhchap-ctrl-secret DHHC-1:02:YWM4YTZmMWU1NTFkMmQ0N2NjYTRjMjI0ODc5N2Q4MDYwYmI1MTE4YjkyOWNjNjMwjf/aOg==: 00:10:56.453 12:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:56.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
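Besides the SPDK bdev_nvme initiator, each iteration also exercises the kernel host: nvme-cli is pointed at the same listener with the secrets passed as DHHC-1 strings on the command line. A minimal sketch of that leg, using the same addresses as the trace, is shown below; <host-secret> and <ctrl-secret> stand in for the full DHHC-1 values printed in the log, which are generated test keys and only meaningful for this run.

    # Kernel-initiator leg of the check, as exercised by nvme-cli in the trace above.
    # <host-secret> / <ctrl-secret> are placeholders for the DHHC-1:xx:...: strings from the log.
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5

    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
        -q "$hostnqn" --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 \
        --dhchap-secret '<host-secret>' --dhchap-ctrl-secret '<ctrl-secret>'

    # If authentication succeeded the controller shows up; disconnecting it is what
    # produces the "disconnected 1 controller(s)" lines seen throughout the log.
    nvme disconnect -n "$subnqn"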
00:10:56.453 12:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:10:56.453 12:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:56.453 12:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:56.453 12:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:56.453 12:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:56.453 12:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:56.453 12:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:57.023 12:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:10:57.023 12:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:57.023 12:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:57.023 12:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:57.023 12:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:10:57.023 12:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:57.023 12:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:57.023 12:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.023 12:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.023 12:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.023 12:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:57.023 12:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:57.280 00:10:57.280 12:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:57.280 12:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:57.280 12:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:57.539 12:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:57.539 12:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:57.539 12:35:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.539 12:35:23 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:10:57.539 12:35:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.539 12:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:57.539 { 00:10:57.539 "cntlid": 21, 00:10:57.539 "qid": 0, 00:10:57.539 "state": "enabled", 00:10:57.539 "thread": "nvmf_tgt_poll_group_000", 00:10:57.539 "listen_address": { 00:10:57.539 "trtype": "TCP", 00:10:57.539 "adrfam": "IPv4", 00:10:57.539 "traddr": "10.0.0.2", 00:10:57.539 "trsvcid": "4420" 00:10:57.539 }, 00:10:57.539 "peer_address": { 00:10:57.539 "trtype": "TCP", 00:10:57.539 "adrfam": "IPv4", 00:10:57.539 "traddr": "10.0.0.1", 00:10:57.539 "trsvcid": "48250" 00:10:57.539 }, 00:10:57.539 "auth": { 00:10:57.539 "state": "completed", 00:10:57.539 "digest": "sha256", 00:10:57.539 "dhgroup": "ffdhe3072" 00:10:57.539 } 00:10:57.539 } 00:10:57.539 ]' 00:10:57.539 12:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:57.539 12:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:57.539 12:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:10:57.539 12:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:57.539 12:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:57.796 12:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:57.796 12:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:57.796 12:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:58.055 12:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:02:YTgwOTU2ZGMyMjY4NTBhYzBhZGM1MTI1OTJhNDBiOTc1MzViNDdkYjhkNmY0M2EwVb/DEw==: --dhchap-ctrl-secret DHHC-1:01:ZWFmOGJiOTllMDUwODc2MWU4M2Q4NTg4OTUyZDE5MjBPakZ8: 00:10:58.619 12:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:58.619 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:58.619 12:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:10:58.619 12:35:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.619 12:35:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.619 12:35:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.619 12:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:10:58.619 12:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:58.619 12:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:10:58.877 12:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:10:58.877 12:35:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:10:58.877 12:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:10:58.877 12:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:10:58.877 12:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:10:58.877 12:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:58.877 12:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key3 00:10:58.877 12:35:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.877 12:35:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.877 12:35:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.877 12:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:58.877 12:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:10:59.136 00:10:59.136 12:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:10:59.136 12:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:10:59.136 12:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:59.700 12:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:59.700 12:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:59.700 12:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.700 12:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.700 12:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.700 12:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:10:59.700 { 00:10:59.700 "cntlid": 23, 00:10:59.700 "qid": 0, 00:10:59.700 "state": "enabled", 00:10:59.700 "thread": "nvmf_tgt_poll_group_000", 00:10:59.700 "listen_address": { 00:10:59.700 "trtype": "TCP", 00:10:59.700 "adrfam": "IPv4", 00:10:59.700 "traddr": "10.0.0.2", 00:10:59.700 "trsvcid": "4420" 00:10:59.700 }, 00:10:59.700 "peer_address": { 00:10:59.700 "trtype": "TCP", 00:10:59.700 "adrfam": "IPv4", 00:10:59.700 "traddr": "10.0.0.1", 00:10:59.700 "trsvcid": "48274" 00:10:59.700 }, 00:10:59.700 "auth": { 00:10:59.700 "state": "completed", 00:10:59.700 "digest": "sha256", 00:10:59.700 "dhgroup": "ffdhe3072" 00:10:59.700 } 00:10:59.700 } 00:10:59.700 ]' 00:10:59.700 12:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:10:59.700 12:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:59.700 12:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.dhgroup' 00:10:59.700 12:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:59.700 12:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:10:59.700 12:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:59.700 12:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:59.700 12:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:59.957 12:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:03:YzM3ZjlmZGExOWIzOTIzYTM4MjY2YzJlMjc4YTJmNTI5YzM4YzRmMjA2N2Q1YmE4NTg4ZmQxOTVlZmUwZjRhNUvHC7o=: 00:11:00.518 12:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:00.518 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:00.518 12:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:11:00.518 12:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.518 12:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.518 12:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.518 12:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:00.518 12:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:00.518 12:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:00.518 12:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:00.774 12:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:11:00.774 12:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:00.774 12:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:00.774 12:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:00.774 12:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:00.774 12:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:00.774 12:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:00.774 12:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.774 12:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.774 12:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.774 12:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:00.774 12:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:01.338 00:11:01.339 12:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:01.339 12:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:01.339 12:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:01.596 12:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:01.596 12:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:01.596 12:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.596 12:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.596 12:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:01.596 12:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:01.596 { 00:11:01.596 "cntlid": 25, 00:11:01.596 "qid": 0, 00:11:01.596 "state": "enabled", 00:11:01.596 "thread": "nvmf_tgt_poll_group_000", 00:11:01.596 "listen_address": { 00:11:01.596 "trtype": "TCP", 00:11:01.596 "adrfam": "IPv4", 00:11:01.596 "traddr": "10.0.0.2", 00:11:01.596 "trsvcid": "4420" 00:11:01.596 }, 00:11:01.596 "peer_address": { 00:11:01.596 "trtype": "TCP", 00:11:01.596 "adrfam": "IPv4", 00:11:01.596 "traddr": "10.0.0.1", 00:11:01.596 "trsvcid": "48298" 00:11:01.596 }, 00:11:01.596 "auth": { 00:11:01.596 "state": "completed", 00:11:01.596 "digest": "sha256", 00:11:01.596 "dhgroup": "ffdhe4096" 00:11:01.596 } 00:11:01.596 } 00:11:01.596 ]' 00:11:01.596 12:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:01.596 12:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:01.596 12:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:01.596 12:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:01.597 12:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:01.597 12:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:01.597 12:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:01.597 12:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:02.163 12:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:00:NDFjNTIxNjcxZGJlYzJkYTY3ODRiOWE5MDlmOTAxYTFhMGYyODdhMmE2MGFkZDQ58CXGvw==: --dhchap-ctrl-secret 
DHHC-1:03:NDk4NDYxNTNjY2RlNjE5ZDQ2Y2VlZWRjZjYzMDhiYzU4YTA3ZmJjYjk2MjJkMTcxYjU5Yjk5ZDcyMjU5MzhmOaZfXz8=: 00:11:02.728 12:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:02.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:02.728 12:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:11:02.728 12:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.728 12:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.728 12:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.728 12:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:02.728 12:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:02.728 12:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:02.986 12:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:11:02.986 12:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:02.986 12:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:02.986 12:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:02.986 12:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:02.986 12:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:02.986 12:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:02.986 12:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.986 12:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.986 12:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.986 12:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:02.986 12:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:03.245 00:11:03.245 12:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:03.245 12:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:03.245 12:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:03.504 12:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
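The actual pass/fail signal in each iteration is the block of jq checks that follows the attach: the test pulls the qpair list for the subsystem and confirms that the negotiated digest and DH group match what was configured and that the authentication state reads "completed". Condensed from the logged commands (same NQN and rpc.py path as above, here in the ffdhe4096 pass), the check amounts to:

    # Verification step, condensed from the trace: inspect the accepted qpair on the target
    # and make sure DH-HMAC-CHAP completed with the expected parameters.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0

    qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")

    [[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == sha256    ]]
    [[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == ffdhe4096 ]]
    [[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == completed ]]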
00:11:03.504 12:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:03.504 12:35:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.504 12:35:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.504 12:35:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.504 12:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:03.504 { 00:11:03.504 "cntlid": 27, 00:11:03.504 "qid": 0, 00:11:03.504 "state": "enabled", 00:11:03.504 "thread": "nvmf_tgt_poll_group_000", 00:11:03.504 "listen_address": { 00:11:03.504 "trtype": "TCP", 00:11:03.504 "adrfam": "IPv4", 00:11:03.504 "traddr": "10.0.0.2", 00:11:03.504 "trsvcid": "4420" 00:11:03.504 }, 00:11:03.504 "peer_address": { 00:11:03.504 "trtype": "TCP", 00:11:03.504 "adrfam": "IPv4", 00:11:03.504 "traddr": "10.0.0.1", 00:11:03.504 "trsvcid": "48316" 00:11:03.504 }, 00:11:03.504 "auth": { 00:11:03.504 "state": "completed", 00:11:03.504 "digest": "sha256", 00:11:03.504 "dhgroup": "ffdhe4096" 00:11:03.504 } 00:11:03.504 } 00:11:03.504 ]' 00:11:03.504 12:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:03.761 12:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:03.761 12:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:03.761 12:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:03.761 12:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:03.761 12:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:03.761 12:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:03.761 12:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:04.019 12:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:01:ZDFjMDIxNzhjMjcwOTk2YmI0NzBmN2Y0NTliYWI4YjkvSJaJ: --dhchap-ctrl-secret DHHC-1:02:YWM4YTZmMWU1NTFkMmQ0N2NjYTRjMjI0ODc5N2Q4MDYwYmI1MTE4YjkyOWNjNjMwjf/aOg==: 00:11:04.633 12:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:04.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:04.633 12:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:11:04.633 12:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.633 12:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.633 12:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.633 12:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:04.633 12:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:04.633 12:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:04.890 12:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:11:04.890 12:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:04.890 12:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:04.890 12:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:04.890 12:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:04.890 12:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:04.890 12:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:04.890 12:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.890 12:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.890 12:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.890 12:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:04.890 12:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:05.454 00:11:05.454 12:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:05.454 12:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:05.454 12:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:05.454 12:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:05.454 12:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:05.454 12:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.454 12:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.710 12:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.710 12:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:05.710 { 00:11:05.710 "cntlid": 29, 00:11:05.710 "qid": 0, 00:11:05.710 "state": "enabled", 00:11:05.710 "thread": "nvmf_tgt_poll_group_000", 00:11:05.710 "listen_address": { 00:11:05.710 "trtype": "TCP", 00:11:05.710 "adrfam": "IPv4", 00:11:05.710 "traddr": "10.0.0.2", 00:11:05.710 "trsvcid": "4420" 00:11:05.710 }, 00:11:05.710 "peer_address": { 00:11:05.710 "trtype": "TCP", 00:11:05.710 "adrfam": "IPv4", 00:11:05.710 "traddr": "10.0.0.1", 00:11:05.710 "trsvcid": "51028" 00:11:05.710 }, 00:11:05.710 "auth": { 00:11:05.710 "state": "completed", 00:11:05.710 "digest": "sha256", 00:11:05.710 "dhgroup": 
"ffdhe4096" 00:11:05.710 } 00:11:05.710 } 00:11:05.710 ]' 00:11:05.710 12:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:05.710 12:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:05.710 12:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:05.710 12:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:05.710 12:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:05.710 12:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:05.710 12:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:05.710 12:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:05.967 12:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:02:YTgwOTU2ZGMyMjY4NTBhYzBhZGM1MTI1OTJhNDBiOTc1MzViNDdkYjhkNmY0M2EwVb/DEw==: --dhchap-ctrl-secret DHHC-1:01:ZWFmOGJiOTllMDUwODc2MWU4M2Q4NTg4OTUyZDE5MjBPakZ8: 00:11:06.898 12:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:06.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:06.898 12:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:11:06.898 12:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.898 12:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.898 12:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.898 12:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:06.898 12:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:06.898 12:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:07.155 12:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:11:07.155 12:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:07.155 12:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:07.155 12:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:07.155 12:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:07.155 12:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:07.155 12:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key3 00:11:07.155 12:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.155 12:35:33 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:11:07.155 12:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.155 12:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:07.155 12:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:07.413 00:11:07.413 12:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:07.413 12:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:07.413 12:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:07.670 12:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:07.670 12:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:07.670 12:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.670 12:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.671 12:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.671 12:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:07.671 { 00:11:07.671 "cntlid": 31, 00:11:07.671 "qid": 0, 00:11:07.671 "state": "enabled", 00:11:07.671 "thread": "nvmf_tgt_poll_group_000", 00:11:07.671 "listen_address": { 00:11:07.671 "trtype": "TCP", 00:11:07.671 "adrfam": "IPv4", 00:11:07.671 "traddr": "10.0.0.2", 00:11:07.671 "trsvcid": "4420" 00:11:07.671 }, 00:11:07.671 "peer_address": { 00:11:07.671 "trtype": "TCP", 00:11:07.671 "adrfam": "IPv4", 00:11:07.671 "traddr": "10.0.0.1", 00:11:07.671 "trsvcid": "51058" 00:11:07.671 }, 00:11:07.671 "auth": { 00:11:07.671 "state": "completed", 00:11:07.671 "digest": "sha256", 00:11:07.671 "dhgroup": "ffdhe4096" 00:11:07.671 } 00:11:07.671 } 00:11:07.671 ]' 00:11:07.671 12:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:07.671 12:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:07.671 12:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:07.945 12:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:07.945 12:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:07.945 12:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:07.945 12:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:07.945 12:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:08.204 12:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 
16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:03:YzM3ZjlmZGExOWIzOTIzYTM4MjY2YzJlMjc4YTJmNTI5YzM4YzRmMjA2N2Q1YmE4NTg4ZmQxOTVlZmUwZjRhNUvHC7o=: 00:11:08.769 12:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:08.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:08.769 12:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:11:08.769 12:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.769 12:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.769 12:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.769 12:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:08.769 12:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:08.769 12:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:08.770 12:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:09.028 12:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:11:09.028 12:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:09.028 12:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:09.028 12:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:09.028 12:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:09.028 12:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:09.028 12:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:09.028 12:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.028 12:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.028 12:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.028 12:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:09.028 12:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:09.592 00:11:09.592 12:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:09.592 12:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:09.592 12:35:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:09.850 12:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:09.850 12:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:09.850 12:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.850 12:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.850 12:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.850 12:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:09.850 { 00:11:09.850 "cntlid": 33, 00:11:09.850 "qid": 0, 00:11:09.850 "state": "enabled", 00:11:09.850 "thread": "nvmf_tgt_poll_group_000", 00:11:09.850 "listen_address": { 00:11:09.850 "trtype": "TCP", 00:11:09.850 "adrfam": "IPv4", 00:11:09.850 "traddr": "10.0.0.2", 00:11:09.850 "trsvcid": "4420" 00:11:09.850 }, 00:11:09.850 "peer_address": { 00:11:09.850 "trtype": "TCP", 00:11:09.850 "adrfam": "IPv4", 00:11:09.850 "traddr": "10.0.0.1", 00:11:09.850 "trsvcid": "51078" 00:11:09.850 }, 00:11:09.850 "auth": { 00:11:09.850 "state": "completed", 00:11:09.850 "digest": "sha256", 00:11:09.850 "dhgroup": "ffdhe6144" 00:11:09.850 } 00:11:09.850 } 00:11:09.850 ]' 00:11:09.850 12:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:09.850 12:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:09.850 12:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:09.850 12:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:09.850 12:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:10.107 12:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:10.107 12:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:10.107 12:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:10.366 12:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:00:NDFjNTIxNjcxZGJlYzJkYTY3ODRiOWE5MDlmOTAxYTFhMGYyODdhMmE2MGFkZDQ58CXGvw==: --dhchap-ctrl-secret DHHC-1:03:NDk4NDYxNTNjY2RlNjE5ZDQ2Y2VlZWRjZjYzMDhiYzU4YTA3ZmJjYjk2MjJkMTcxYjU5Yjk5ZDcyMjU5MzhmOaZfXz8=: 00:11:10.931 12:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:10.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:10.931 12:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:11:10.931 12:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.931 12:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.931 12:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.931 12:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:10.931 
12:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:10.931 12:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:11.200 12:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:11:11.200 12:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:11.200 12:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:11.200 12:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:11.200 12:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:11.200 12:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:11.200 12:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:11.200 12:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:11.200 12:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.200 12:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:11.200 12:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:11.200 12:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:11.764 00:11:11.764 12:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:11.764 12:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:11.764 12:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:12.023 12:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:12.023 12:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:12.023 12:35:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:12.023 12:35:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.023 12:35:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:12.023 12:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:12.023 { 00:11:12.023 "cntlid": 35, 00:11:12.023 "qid": 0, 00:11:12.023 "state": "enabled", 00:11:12.023 "thread": "nvmf_tgt_poll_group_000", 00:11:12.023 "listen_address": { 00:11:12.023 "trtype": "TCP", 00:11:12.023 "adrfam": "IPv4", 00:11:12.023 "traddr": "10.0.0.2", 00:11:12.023 "trsvcid": "4420" 00:11:12.023 }, 00:11:12.023 "peer_address": { 00:11:12.023 "trtype": "TCP", 00:11:12.023 
"adrfam": "IPv4", 00:11:12.023 "traddr": "10.0.0.1", 00:11:12.023 "trsvcid": "51100" 00:11:12.023 }, 00:11:12.023 "auth": { 00:11:12.023 "state": "completed", 00:11:12.023 "digest": "sha256", 00:11:12.023 "dhgroup": "ffdhe6144" 00:11:12.023 } 00:11:12.023 } 00:11:12.023 ]' 00:11:12.023 12:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:12.023 12:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:12.023 12:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:12.281 12:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:12.281 12:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:12.281 12:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:12.281 12:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:12.281 12:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:12.538 12:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:01:ZDFjMDIxNzhjMjcwOTk2YmI0NzBmN2Y0NTliYWI4YjkvSJaJ: --dhchap-ctrl-secret DHHC-1:02:YWM4YTZmMWU1NTFkMmQ0N2NjYTRjMjI0ODc5N2Q4MDYwYmI1MTE4YjkyOWNjNjMwjf/aOg==: 00:11:13.473 12:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:13.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:13.473 12:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:11:13.473 12:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.473 12:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.473 12:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.473 12:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:13.473 12:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:13.473 12:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:13.473 12:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:11:13.473 12:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:13.473 12:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:13.473 12:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:13.473 12:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:13.473 12:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:13.473 12:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:13.473 12:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.473 12:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.473 12:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.473 12:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:13.473 12:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:14.040 00:11:14.040 12:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:14.040 12:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:14.040 12:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:14.300 12:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:14.300 12:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:14.300 12:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.300 12:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.300 12:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.300 12:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:14.300 { 00:11:14.300 "cntlid": 37, 00:11:14.300 "qid": 0, 00:11:14.300 "state": "enabled", 00:11:14.300 "thread": "nvmf_tgt_poll_group_000", 00:11:14.300 "listen_address": { 00:11:14.300 "trtype": "TCP", 00:11:14.300 "adrfam": "IPv4", 00:11:14.300 "traddr": "10.0.0.2", 00:11:14.300 "trsvcid": "4420" 00:11:14.300 }, 00:11:14.300 "peer_address": { 00:11:14.300 "trtype": "TCP", 00:11:14.300 "adrfam": "IPv4", 00:11:14.300 "traddr": "10.0.0.1", 00:11:14.300 "trsvcid": "51914" 00:11:14.300 }, 00:11:14.300 "auth": { 00:11:14.300 "state": "completed", 00:11:14.300 "digest": "sha256", 00:11:14.300 "dhgroup": "ffdhe6144" 00:11:14.300 } 00:11:14.300 } 00:11:14.300 ]' 00:11:14.300 12:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:14.300 12:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:14.300 12:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:14.559 12:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:14.559 12:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:14.559 12:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:14.559 12:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:14.559 12:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:14.818 12:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:02:YTgwOTU2ZGMyMjY4NTBhYzBhZGM1MTI1OTJhNDBiOTc1MzViNDdkYjhkNmY0M2EwVb/DEw==: --dhchap-ctrl-secret DHHC-1:01:ZWFmOGJiOTllMDUwODc2MWU4M2Q4NTg4OTUyZDE5MjBPakZ8: 00:11:15.411 12:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:15.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:15.411 12:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:11:15.411 12:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.411 12:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.411 12:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.411 12:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:15.411 12:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:15.411 12:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:15.669 12:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:11:15.669 12:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:15.669 12:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:15.669 12:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:15.669 12:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:15.669 12:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:15.669 12:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key3 00:11:15.669 12:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.669 12:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.669 12:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.669 12:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:15.669 12:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:16.235 00:11:16.235 12:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
00:11:16.235 12:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:16.235 12:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:16.492 12:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:16.492 12:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:16.492 12:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.492 12:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.492 12:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.492 12:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:16.492 { 00:11:16.492 "cntlid": 39, 00:11:16.492 "qid": 0, 00:11:16.492 "state": "enabled", 00:11:16.492 "thread": "nvmf_tgt_poll_group_000", 00:11:16.492 "listen_address": { 00:11:16.492 "trtype": "TCP", 00:11:16.492 "adrfam": "IPv4", 00:11:16.492 "traddr": "10.0.0.2", 00:11:16.492 "trsvcid": "4420" 00:11:16.492 }, 00:11:16.492 "peer_address": { 00:11:16.492 "trtype": "TCP", 00:11:16.492 "adrfam": "IPv4", 00:11:16.492 "traddr": "10.0.0.1", 00:11:16.492 "trsvcid": "51936" 00:11:16.492 }, 00:11:16.492 "auth": { 00:11:16.492 "state": "completed", 00:11:16.492 "digest": "sha256", 00:11:16.492 "dhgroup": "ffdhe6144" 00:11:16.492 } 00:11:16.492 } 00:11:16.492 ]' 00:11:16.492 12:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:16.493 12:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:16.493 12:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:16.493 12:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:16.493 12:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:16.493 12:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:16.493 12:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:16.493 12:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:17.059 12:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:03:YzM3ZjlmZGExOWIzOTIzYTM4MjY2YzJlMjc4YTJmNTI5YzM4YzRmMjA2N2Q1YmE4NTg4ZmQxOTVlZmUwZjRhNUvHC7o=: 00:11:17.626 12:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:17.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:17.626 12:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:11:17.626 12:35:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.626 12:35:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.626 12:35:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.626 12:35:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:17.626 12:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:17.626 12:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:17.626 12:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:17.884 12:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:11:17.884 12:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:17.884 12:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:17.884 12:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:17.884 12:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:17.884 12:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:17.884 12:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:17.884 12:35:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.884 12:35:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.884 12:35:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.884 12:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:17.884 12:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:18.450 00:11:18.450 12:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:18.450 12:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:18.450 12:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:19.019 12:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:19.019 12:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:19.019 12:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.019 12:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.019 12:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.019 12:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:19.019 { 00:11:19.019 "cntlid": 41, 00:11:19.019 "qid": 0, 00:11:19.019 "state": "enabled", 00:11:19.019 "thread": "nvmf_tgt_poll_group_000", 00:11:19.019 "listen_address": { 00:11:19.019 "trtype": 
"TCP", 00:11:19.019 "adrfam": "IPv4", 00:11:19.019 "traddr": "10.0.0.2", 00:11:19.019 "trsvcid": "4420" 00:11:19.019 }, 00:11:19.019 "peer_address": { 00:11:19.019 "trtype": "TCP", 00:11:19.019 "adrfam": "IPv4", 00:11:19.019 "traddr": "10.0.0.1", 00:11:19.019 "trsvcid": "51978" 00:11:19.019 }, 00:11:19.019 "auth": { 00:11:19.019 "state": "completed", 00:11:19.019 "digest": "sha256", 00:11:19.019 "dhgroup": "ffdhe8192" 00:11:19.019 } 00:11:19.019 } 00:11:19.019 ]' 00:11:19.019 12:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:19.019 12:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:19.019 12:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:19.019 12:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:19.019 12:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:19.019 12:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:19.019 12:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:19.019 12:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:19.276 12:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:00:NDFjNTIxNjcxZGJlYzJkYTY3ODRiOWE5MDlmOTAxYTFhMGYyODdhMmE2MGFkZDQ58CXGvw==: --dhchap-ctrl-secret DHHC-1:03:NDk4NDYxNTNjY2RlNjE5ZDQ2Y2VlZWRjZjYzMDhiYzU4YTA3ZmJjYjk2MjJkMTcxYjU5Yjk5ZDcyMjU5MzhmOaZfXz8=: 00:11:19.843 12:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:19.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:19.843 12:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:11:19.843 12:35:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.843 12:35:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.843 12:35:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.843 12:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:19.843 12:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:19.843 12:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:20.101 12:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:11:20.101 12:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:20.101 12:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:20.101 12:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:20.101 12:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:20.101 12:35:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:20.101 12:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:20.101 12:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.101 12:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.101 12:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.101 12:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:20.101 12:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:21.034 00:11:21.034 12:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:21.034 12:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:21.034 12:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:21.034 12:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:21.034 12:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:21.034 12:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.034 12:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.034 12:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.034 12:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:21.034 { 00:11:21.034 "cntlid": 43, 00:11:21.034 "qid": 0, 00:11:21.034 "state": "enabled", 00:11:21.034 "thread": "nvmf_tgt_poll_group_000", 00:11:21.034 "listen_address": { 00:11:21.034 "trtype": "TCP", 00:11:21.034 "adrfam": "IPv4", 00:11:21.034 "traddr": "10.0.0.2", 00:11:21.034 "trsvcid": "4420" 00:11:21.034 }, 00:11:21.034 "peer_address": { 00:11:21.034 "trtype": "TCP", 00:11:21.034 "adrfam": "IPv4", 00:11:21.034 "traddr": "10.0.0.1", 00:11:21.034 "trsvcid": "51984" 00:11:21.034 }, 00:11:21.034 "auth": { 00:11:21.034 "state": "completed", 00:11:21.034 "digest": "sha256", 00:11:21.034 "dhgroup": "ffdhe8192" 00:11:21.034 } 00:11:21.034 } 00:11:21.034 ]' 00:11:21.034 12:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:21.292 12:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:21.292 12:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:21.292 12:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:21.292 12:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:21.292 12:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed 
== \c\o\m\p\l\e\t\e\d ]] 00:11:21.292 12:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:21.292 12:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:21.551 12:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:01:ZDFjMDIxNzhjMjcwOTk2YmI0NzBmN2Y0NTliYWI4YjkvSJaJ: --dhchap-ctrl-secret DHHC-1:02:YWM4YTZmMWU1NTFkMmQ0N2NjYTRjMjI0ODc5N2Q4MDYwYmI1MTE4YjkyOWNjNjMwjf/aOg==: 00:11:22.118 12:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:22.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:22.118 12:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:11:22.118 12:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.118 12:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.118 12:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.118 12:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:22.118 12:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:22.118 12:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:22.376 12:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:11:22.376 12:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:22.376 12:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:22.376 12:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:22.376 12:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:22.376 12:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:22.376 12:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:22.376 12:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.376 12:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.376 12:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.376 12:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:22.376 12:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:22.993 00:11:23.251 12:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:23.251 12:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:23.251 12:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:23.510 12:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:23.510 12:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:23.510 12:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.510 12:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.510 12:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.510 12:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:23.510 { 00:11:23.510 "cntlid": 45, 00:11:23.510 "qid": 0, 00:11:23.510 "state": "enabled", 00:11:23.510 "thread": "nvmf_tgt_poll_group_000", 00:11:23.510 "listen_address": { 00:11:23.510 "trtype": "TCP", 00:11:23.510 "adrfam": "IPv4", 00:11:23.510 "traddr": "10.0.0.2", 00:11:23.510 "trsvcid": "4420" 00:11:23.510 }, 00:11:23.510 "peer_address": { 00:11:23.510 "trtype": "TCP", 00:11:23.510 "adrfam": "IPv4", 00:11:23.510 "traddr": "10.0.0.1", 00:11:23.510 "trsvcid": "52002" 00:11:23.510 }, 00:11:23.510 "auth": { 00:11:23.510 "state": "completed", 00:11:23.510 "digest": "sha256", 00:11:23.510 "dhgroup": "ffdhe8192" 00:11:23.510 } 00:11:23.510 } 00:11:23.510 ]' 00:11:23.510 12:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:23.510 12:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:23.510 12:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:23.510 12:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:23.510 12:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:23.510 12:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:23.510 12:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:23.510 12:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:24.076 12:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:02:YTgwOTU2ZGMyMjY4NTBhYzBhZGM1MTI1OTJhNDBiOTc1MzViNDdkYjhkNmY0M2EwVb/DEw==: --dhchap-ctrl-secret DHHC-1:01:ZWFmOGJiOTllMDUwODc2MWU4M2Q4NTg4OTUyZDE5MjBPakZ8: 00:11:24.642 12:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:24.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:24.642 12:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:11:24.642 12:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.642 12:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.642 12:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.642 12:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:24.642 12:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:24.642 12:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:24.900 12:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:11:24.900 12:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:24.901 12:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:11:24.901 12:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:11:24.901 12:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:24.901 12:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:24.901 12:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key3 00:11:24.901 12:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.901 12:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.901 12:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.901 12:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:24.901 12:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:25.465 00:11:25.465 12:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:25.465 12:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:25.465 12:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:25.722 12:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:25.722 12:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:25.722 12:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.722 12:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.722 12:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.722 12:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:11:25.722 { 00:11:25.722 "cntlid": 47, 00:11:25.722 "qid": 0, 00:11:25.722 "state": "enabled", 00:11:25.722 "thread": "nvmf_tgt_poll_group_000", 00:11:25.722 "listen_address": { 00:11:25.722 "trtype": "TCP", 00:11:25.722 "adrfam": "IPv4", 00:11:25.722 "traddr": "10.0.0.2", 00:11:25.722 "trsvcid": "4420" 00:11:25.722 }, 00:11:25.722 "peer_address": { 00:11:25.722 "trtype": "TCP", 00:11:25.722 "adrfam": "IPv4", 00:11:25.722 "traddr": "10.0.0.1", 00:11:25.722 "trsvcid": "35612" 00:11:25.722 }, 00:11:25.722 "auth": { 00:11:25.722 "state": "completed", 00:11:25.722 "digest": "sha256", 00:11:25.722 "dhgroup": "ffdhe8192" 00:11:25.722 } 00:11:25.722 } 00:11:25.722 ]' 00:11:25.722 12:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:25.722 12:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:25.722 12:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:26.022 12:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:26.022 12:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:26.022 12:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:26.022 12:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:26.022 12:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:26.280 12:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:03:YzM3ZjlmZGExOWIzOTIzYTM4MjY2YzJlMjc4YTJmNTI5YzM4YzRmMjA2N2Q1YmE4NTg4ZmQxOTVlZmUwZjRhNUvHC7o=: 00:11:26.851 12:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:26.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:26.851 12:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:11:26.851 12:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.851 12:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.851 12:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.851 12:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:11:26.851 12:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:26.851 12:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:26.851 12:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:26.851 12:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:27.109 12:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:11:27.109 12:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:11:27.109 12:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:27.109 12:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:27.109 12:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:27.109 12:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:27.109 12:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:27.109 12:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.109 12:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.109 12:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.109 12:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:27.109 12:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:27.368 00:11:27.368 12:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:27.368 12:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:27.368 12:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:27.945 12:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:27.945 12:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:27.945 12:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.945 12:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.945 12:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.945 12:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:27.945 { 00:11:27.945 "cntlid": 49, 00:11:27.945 "qid": 0, 00:11:27.945 "state": "enabled", 00:11:27.945 "thread": "nvmf_tgt_poll_group_000", 00:11:27.945 "listen_address": { 00:11:27.945 "trtype": "TCP", 00:11:27.945 "adrfam": "IPv4", 00:11:27.945 "traddr": "10.0.0.2", 00:11:27.945 "trsvcid": "4420" 00:11:27.945 }, 00:11:27.945 "peer_address": { 00:11:27.945 "trtype": "TCP", 00:11:27.945 "adrfam": "IPv4", 00:11:27.945 "traddr": "10.0.0.1", 00:11:27.945 "trsvcid": "35630" 00:11:27.945 }, 00:11:27.945 "auth": { 00:11:27.945 "state": "completed", 00:11:27.945 "digest": "sha384", 00:11:27.945 "dhgroup": "null" 00:11:27.945 } 00:11:27.945 } 00:11:27.945 ]' 00:11:27.945 12:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:27.945 12:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:27.945 12:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:27.945 12:35:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:27.945 12:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:27.945 12:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:27.945 12:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:27.945 12:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:28.208 12:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:00:NDFjNTIxNjcxZGJlYzJkYTY3ODRiOWE5MDlmOTAxYTFhMGYyODdhMmE2MGFkZDQ58CXGvw==: --dhchap-ctrl-secret DHHC-1:03:NDk4NDYxNTNjY2RlNjE5ZDQ2Y2VlZWRjZjYzMDhiYzU4YTA3ZmJjYjk2MjJkMTcxYjU5Yjk5ZDcyMjU5MzhmOaZfXz8=: 00:11:29.143 12:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:29.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:29.143 12:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:11:29.143 12:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.143 12:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.143 12:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.143 12:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:29.143 12:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:29.143 12:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:29.143 12:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:11:29.143 12:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:29.143 12:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:29.143 12:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:29.143 12:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:29.143 12:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:29.143 12:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:29.143 12:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.143 12:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.143 12:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.143 12:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:29.143 12:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:29.710 00:11:29.710 12:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:29.710 12:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:29.710 12:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:29.969 12:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:29.969 12:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:29.969 12:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.969 12:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.969 12:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.969 12:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:29.969 { 00:11:29.969 "cntlid": 51, 00:11:29.969 "qid": 0, 00:11:29.969 "state": "enabled", 00:11:29.969 "thread": "nvmf_tgt_poll_group_000", 00:11:29.969 "listen_address": { 00:11:29.969 "trtype": "TCP", 00:11:29.969 "adrfam": "IPv4", 00:11:29.969 "traddr": "10.0.0.2", 00:11:29.969 "trsvcid": "4420" 00:11:29.969 }, 00:11:29.969 "peer_address": { 00:11:29.969 "trtype": "TCP", 00:11:29.969 "adrfam": "IPv4", 00:11:29.969 "traddr": "10.0.0.1", 00:11:29.969 "trsvcid": "35660" 00:11:29.969 }, 00:11:29.969 "auth": { 00:11:29.969 "state": "completed", 00:11:29.969 "digest": "sha384", 00:11:29.969 "dhgroup": "null" 00:11:29.969 } 00:11:29.969 } 00:11:29.969 ]' 00:11:29.969 12:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:29.969 12:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:29.969 12:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:29.969 12:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:29.969 12:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:29.969 12:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:29.969 12:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:29.969 12:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:30.227 12:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:01:ZDFjMDIxNzhjMjcwOTk2YmI0NzBmN2Y0NTliYWI4YjkvSJaJ: --dhchap-ctrl-secret DHHC-1:02:YWM4YTZmMWU1NTFkMmQ0N2NjYTRjMjI0ODc5N2Q4MDYwYmI1MTE4YjkyOWNjNjMwjf/aOg==: 00:11:31.160 12:35:56 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:31.160 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:31.160 12:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:11:31.160 12:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.160 12:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.160 12:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.160 12:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:31.160 12:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:31.160 12:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:31.160 12:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:11:31.160 12:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:31.160 12:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:31.160 12:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:31.160 12:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:31.160 12:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:31.160 12:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:31.160 12:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.160 12:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.160 12:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.160 12:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:31.160 12:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:31.723 00:11:31.723 12:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:31.723 12:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:31.723 12:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:31.981 12:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:31.981 12:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:31.981 12:35:57 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.981 12:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.981 12:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.981 12:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:31.981 { 00:11:31.981 "cntlid": 53, 00:11:31.981 "qid": 0, 00:11:31.981 "state": "enabled", 00:11:31.981 "thread": "nvmf_tgt_poll_group_000", 00:11:31.981 "listen_address": { 00:11:31.981 "trtype": "TCP", 00:11:31.981 "adrfam": "IPv4", 00:11:31.981 "traddr": "10.0.0.2", 00:11:31.981 "trsvcid": "4420" 00:11:31.981 }, 00:11:31.981 "peer_address": { 00:11:31.981 "trtype": "TCP", 00:11:31.981 "adrfam": "IPv4", 00:11:31.981 "traddr": "10.0.0.1", 00:11:31.981 "trsvcid": "35684" 00:11:31.981 }, 00:11:31.981 "auth": { 00:11:31.981 "state": "completed", 00:11:31.981 "digest": "sha384", 00:11:31.981 "dhgroup": "null" 00:11:31.981 } 00:11:31.981 } 00:11:31.981 ]' 00:11:31.981 12:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:31.981 12:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:31.981 12:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:31.981 12:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:31.981 12:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:31.981 12:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:31.981 12:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:31.981 12:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:32.239 12:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:02:YTgwOTU2ZGMyMjY4NTBhYzBhZGM1MTI1OTJhNDBiOTc1MzViNDdkYjhkNmY0M2EwVb/DEw==: --dhchap-ctrl-secret DHHC-1:01:ZWFmOGJiOTllMDUwODc2MWU4M2Q4NTg4OTUyZDE5MjBPakZ8: 00:11:33.172 12:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:33.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:33.172 12:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:11:33.172 12:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.172 12:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.172 12:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.172 12:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:33.172 12:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:33.172 12:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:33.172 12:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 
-- # connect_authenticate sha384 null 3 00:11:33.172 12:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:33.172 12:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:33.172 12:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:11:33.172 12:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:33.172 12:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:33.172 12:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key3 00:11:33.172 12:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.172 12:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.172 12:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.172 12:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:33.172 12:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:33.739 00:11:33.739 12:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:33.739 12:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:33.739 12:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:33.739 12:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:33.739 12:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:33.739 12:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.739 12:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.739 12:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.739 12:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:33.739 { 00:11:33.739 "cntlid": 55, 00:11:33.739 "qid": 0, 00:11:33.739 "state": "enabled", 00:11:33.739 "thread": "nvmf_tgt_poll_group_000", 00:11:33.739 "listen_address": { 00:11:33.739 "trtype": "TCP", 00:11:33.739 "adrfam": "IPv4", 00:11:33.739 "traddr": "10.0.0.2", 00:11:33.739 "trsvcid": "4420" 00:11:33.739 }, 00:11:33.739 "peer_address": { 00:11:33.739 "trtype": "TCP", 00:11:33.739 "adrfam": "IPv4", 00:11:33.739 "traddr": "10.0.0.1", 00:11:33.739 "trsvcid": "60666" 00:11:33.739 }, 00:11:33.739 "auth": { 00:11:33.739 "state": "completed", 00:11:33.739 "digest": "sha384", 00:11:33.739 "dhgroup": "null" 00:11:33.739 } 00:11:33.739 } 00:11:33.739 ]' 00:11:33.998 12:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:33.998 12:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:33.998 12:35:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:33.998 12:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:11:33.998 12:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:33.998 12:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:33.998 12:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:33.998 12:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:34.256 12:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:03:YzM3ZjlmZGExOWIzOTIzYTM4MjY2YzJlMjc4YTJmNTI5YzM4YzRmMjA2N2Q1YmE4NTg4ZmQxOTVlZmUwZjRhNUvHC7o=: 00:11:34.822 12:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:34.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:34.822 12:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:11:34.822 12:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.822 12:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.822 12:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.822 12:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:34.822 12:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:34.822 12:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:34.822 12:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:35.080 12:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:11:35.080 12:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:35.080 12:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:35.080 12:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:35.080 12:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:35.080 12:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:35.080 12:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:35.080 12:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.080 12:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.080 12:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.080 12:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:35.080 12:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:35.338 00:11:35.338 12:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:35.338 12:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:35.338 12:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:35.595 12:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:35.595 12:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:35.595 12:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.595 12:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.595 12:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.595 12:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:35.595 { 00:11:35.595 "cntlid": 57, 00:11:35.595 "qid": 0, 00:11:35.595 "state": "enabled", 00:11:35.596 "thread": "nvmf_tgt_poll_group_000", 00:11:35.596 "listen_address": { 00:11:35.596 "trtype": "TCP", 00:11:35.596 "adrfam": "IPv4", 00:11:35.596 "traddr": "10.0.0.2", 00:11:35.596 "trsvcid": "4420" 00:11:35.596 }, 00:11:35.596 "peer_address": { 00:11:35.596 "trtype": "TCP", 00:11:35.596 "adrfam": "IPv4", 00:11:35.596 "traddr": "10.0.0.1", 00:11:35.596 "trsvcid": "60686" 00:11:35.596 }, 00:11:35.596 "auth": { 00:11:35.596 "state": "completed", 00:11:35.596 "digest": "sha384", 00:11:35.596 "dhgroup": "ffdhe2048" 00:11:35.596 } 00:11:35.596 } 00:11:35.596 ]' 00:11:35.596 12:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:35.596 12:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:35.596 12:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:35.854 12:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:35.854 12:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:35.854 12:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:35.854 12:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:35.854 12:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:36.113 12:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:00:NDFjNTIxNjcxZGJlYzJkYTY3ODRiOWE5MDlmOTAxYTFhMGYyODdhMmE2MGFkZDQ58CXGvw==: --dhchap-ctrl-secret 
DHHC-1:03:NDk4NDYxNTNjY2RlNjE5ZDQ2Y2VlZWRjZjYzMDhiYzU4YTA3ZmJjYjk2MjJkMTcxYjU5Yjk5ZDcyMjU5MzhmOaZfXz8=: 00:11:36.679 12:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:36.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:36.679 12:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:11:36.679 12:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.679 12:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.679 12:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.679 12:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:36.679 12:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:36.679 12:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:36.937 12:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:11:36.937 12:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:36.937 12:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:36.938 12:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:36.938 12:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:36.938 12:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:36.938 12:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:36.938 12:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.938 12:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.938 12:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.938 12:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:36.938 12:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:37.195 00:11:37.195 12:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:37.195 12:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:37.195 12:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:37.453 12:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
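[Editor's note] The trace above is one iteration of the connect_authenticate cycle that target/auth.sh repeats for each digest/dhgroup/key combination. Below is a condensed sketch of that cycle, reconstructed only from commands already visible in this log: hostrpc expands to the host-side rpc.py on /var/tmp/host.sock exactly as traced, while rpc_cmd is the autotest wrapper for the target-side rpc.py whose socket is not shown in this excerpt and is assumed here; key names such as key1/ckey1 were registered earlier in the run. This is a sketch of the flow, not the script itself.

# Host side: limit the initiator to the digest/dhgroup pair under test (sha384 + ffdhe2048 shown).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
# Target side: allow the host NQN on cnode0 with the DH-HMAC-CHAP key pair.
# (rpc_cmd is assumed to wrap the target's rpc.py; its socket path does not appear in this excerpt.)
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
# Host side: attach a controller, which forces the DH-HMAC-CHAP handshake.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
# Target side: confirm the accepted qpair completed authentication with the expected parameters.
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth | .digest, .dhgroup, .state'   # expect: sha384 / ffdhe2048 / completed
# Host side: detach the controller before the next combination.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0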
00:11:37.453 12:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:37.453 12:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:37.453 12:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.453 12:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:37.453 12:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:37.453 { 00:11:37.453 "cntlid": 59, 00:11:37.453 "qid": 0, 00:11:37.453 "state": "enabled", 00:11:37.453 "thread": "nvmf_tgt_poll_group_000", 00:11:37.453 "listen_address": { 00:11:37.453 "trtype": "TCP", 00:11:37.453 "adrfam": "IPv4", 00:11:37.453 "traddr": "10.0.0.2", 00:11:37.453 "trsvcid": "4420" 00:11:37.453 }, 00:11:37.453 "peer_address": { 00:11:37.453 "trtype": "TCP", 00:11:37.453 "adrfam": "IPv4", 00:11:37.453 "traddr": "10.0.0.1", 00:11:37.453 "trsvcid": "60710" 00:11:37.453 }, 00:11:37.453 "auth": { 00:11:37.453 "state": "completed", 00:11:37.453 "digest": "sha384", 00:11:37.453 "dhgroup": "ffdhe2048" 00:11:37.453 } 00:11:37.453 } 00:11:37.453 ]' 00:11:37.453 12:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:37.712 12:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:37.712 12:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:37.712 12:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:37.712 12:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:37.712 12:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:37.712 12:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:37.712 12:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:37.970 12:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:01:ZDFjMDIxNzhjMjcwOTk2YmI0NzBmN2Y0NTliYWI4YjkvSJaJ: --dhchap-ctrl-secret DHHC-1:02:YWM4YTZmMWU1NTFkMmQ0N2NjYTRjMjI0ODc5N2Q4MDYwYmI1MTE4YjkyOWNjNjMwjf/aOg==: 00:11:38.536 12:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:38.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:38.536 12:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:11:38.536 12:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.536 12:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.536 12:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.536 12:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:38.536 12:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:38.536 12:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:38.793 12:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:11:38.793 12:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:38.793 12:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:38.793 12:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:38.793 12:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:38.793 12:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:38.793 12:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:38.793 12:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.793 12:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.793 12:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.793 12:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:38.793 12:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:39.050 00:11:39.050 12:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:39.050 12:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:39.050 12:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:39.307 12:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:39.307 12:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:39.307 12:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.307 12:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.307 12:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.307 12:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:39.307 { 00:11:39.307 "cntlid": 61, 00:11:39.307 "qid": 0, 00:11:39.307 "state": "enabled", 00:11:39.307 "thread": "nvmf_tgt_poll_group_000", 00:11:39.307 "listen_address": { 00:11:39.307 "trtype": "TCP", 00:11:39.307 "adrfam": "IPv4", 00:11:39.307 "traddr": "10.0.0.2", 00:11:39.307 "trsvcid": "4420" 00:11:39.307 }, 00:11:39.307 "peer_address": { 00:11:39.307 "trtype": "TCP", 00:11:39.307 "adrfam": "IPv4", 00:11:39.307 "traddr": "10.0.0.1", 00:11:39.307 "trsvcid": "60718" 00:11:39.307 }, 00:11:39.307 "auth": { 00:11:39.307 "state": "completed", 00:11:39.307 "digest": "sha384", 00:11:39.307 "dhgroup": 
"ffdhe2048" 00:11:39.307 } 00:11:39.307 } 00:11:39.307 ]' 00:11:39.307 12:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:39.565 12:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:39.565 12:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:39.565 12:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:39.565 12:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:39.565 12:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:39.565 12:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:39.565 12:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:39.824 12:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:02:YTgwOTU2ZGMyMjY4NTBhYzBhZGM1MTI1OTJhNDBiOTc1MzViNDdkYjhkNmY0M2EwVb/DEw==: --dhchap-ctrl-secret DHHC-1:01:ZWFmOGJiOTllMDUwODc2MWU4M2Q4NTg4OTUyZDE5MjBPakZ8: 00:11:40.399 12:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:40.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:40.399 12:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:11:40.399 12:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.399 12:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.399 12:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.399 12:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:40.399 12:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:40.399 12:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:11:40.657 12:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:11:40.657 12:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:40.657 12:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:40.657 12:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:11:40.657 12:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:40.657 12:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:40.657 12:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key3 00:11:40.657 12:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:40.657 12:36:06 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:11:40.657 12:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:40.657 12:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:40.657 12:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:40.915 00:11:40.915 12:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:40.915 12:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:40.915 12:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:41.480 12:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:41.480 12:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:41.480 12:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:41.480 12:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.480 12:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:41.480 12:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:41.480 { 00:11:41.480 "cntlid": 63, 00:11:41.480 "qid": 0, 00:11:41.480 "state": "enabled", 00:11:41.480 "thread": "nvmf_tgt_poll_group_000", 00:11:41.480 "listen_address": { 00:11:41.480 "trtype": "TCP", 00:11:41.480 "adrfam": "IPv4", 00:11:41.480 "traddr": "10.0.0.2", 00:11:41.480 "trsvcid": "4420" 00:11:41.480 }, 00:11:41.480 "peer_address": { 00:11:41.480 "trtype": "TCP", 00:11:41.480 "adrfam": "IPv4", 00:11:41.480 "traddr": "10.0.0.1", 00:11:41.480 "trsvcid": "60752" 00:11:41.480 }, 00:11:41.480 "auth": { 00:11:41.480 "state": "completed", 00:11:41.480 "digest": "sha384", 00:11:41.480 "dhgroup": "ffdhe2048" 00:11:41.480 } 00:11:41.480 } 00:11:41.480 ]' 00:11:41.480 12:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:41.480 12:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:41.480 12:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:41.480 12:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:41.480 12:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:41.480 12:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:41.480 12:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:41.480 12:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:41.738 12:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 
16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:03:YzM3ZjlmZGExOWIzOTIzYTM4MjY2YzJlMjc4YTJmNTI5YzM4YzRmMjA2N2Q1YmE4NTg4ZmQxOTVlZmUwZjRhNUvHC7o=: 00:11:42.303 12:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:42.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:42.303 12:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:11:42.303 12:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.303 12:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.303 12:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.303 12:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:42.303 12:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:42.303 12:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:42.303 12:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:42.561 12:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:11:42.561 12:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:42.561 12:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:42.561 12:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:42.561 12:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:42.561 12:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:42.561 12:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:42.561 12:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:42.561 12:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.561 12:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:42.561 12:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:42.561 12:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:42.818 00:11:43.074 12:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:43.074 12:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:43.074 12:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:43.331 12:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:43.331 12:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:43.331 12:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:43.331 12:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.331 12:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:43.331 12:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:43.331 { 00:11:43.331 "cntlid": 65, 00:11:43.331 "qid": 0, 00:11:43.331 "state": "enabled", 00:11:43.331 "thread": "nvmf_tgt_poll_group_000", 00:11:43.331 "listen_address": { 00:11:43.331 "trtype": "TCP", 00:11:43.331 "adrfam": "IPv4", 00:11:43.331 "traddr": "10.0.0.2", 00:11:43.331 "trsvcid": "4420" 00:11:43.331 }, 00:11:43.331 "peer_address": { 00:11:43.331 "trtype": "TCP", 00:11:43.331 "adrfam": "IPv4", 00:11:43.331 "traddr": "10.0.0.1", 00:11:43.331 "trsvcid": "60786" 00:11:43.331 }, 00:11:43.331 "auth": { 00:11:43.331 "state": "completed", 00:11:43.331 "digest": "sha384", 00:11:43.331 "dhgroup": "ffdhe3072" 00:11:43.331 } 00:11:43.331 } 00:11:43.331 ]' 00:11:43.331 12:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:43.331 12:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:43.331 12:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:43.331 12:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:43.331 12:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:43.331 12:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:43.331 12:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:43.331 12:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:43.589 12:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:00:NDFjNTIxNjcxZGJlYzJkYTY3ODRiOWE5MDlmOTAxYTFhMGYyODdhMmE2MGFkZDQ58CXGvw==: --dhchap-ctrl-secret DHHC-1:03:NDk4NDYxNTNjY2RlNjE5ZDQ2Y2VlZWRjZjYzMDhiYzU4YTA3ZmJjYjk2MjJkMTcxYjU5Yjk5ZDcyMjU5MzhmOaZfXz8=: 00:11:44.521 12:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:44.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:44.521 12:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:11:44.521 12:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.521 12:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.521 12:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.521 12:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:11:44.521 12:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:44.521 12:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:44.521 12:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:11:44.521 12:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:44.521 12:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:44.521 12:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:44.521 12:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:44.521 12:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:44.521 12:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:44.521 12:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.521 12:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.521 12:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.521 12:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:44.521 12:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:45.086 00:11:45.086 12:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:45.086 12:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:45.086 12:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:45.344 12:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:45.344 12:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:45.344 12:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:45.344 12:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.344 12:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:45.344 12:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:45.344 { 00:11:45.344 "cntlid": 67, 00:11:45.344 "qid": 0, 00:11:45.344 "state": "enabled", 00:11:45.344 "thread": "nvmf_tgt_poll_group_000", 00:11:45.344 "listen_address": { 00:11:45.344 "trtype": "TCP", 00:11:45.344 "adrfam": "IPv4", 00:11:45.344 "traddr": "10.0.0.2", 00:11:45.344 "trsvcid": "4420" 00:11:45.344 }, 00:11:45.344 "peer_address": { 00:11:45.344 "trtype": 
"TCP", 00:11:45.344 "adrfam": "IPv4", 00:11:45.344 "traddr": "10.0.0.1", 00:11:45.344 "trsvcid": "33112" 00:11:45.344 }, 00:11:45.344 "auth": { 00:11:45.344 "state": "completed", 00:11:45.344 "digest": "sha384", 00:11:45.344 "dhgroup": "ffdhe3072" 00:11:45.344 } 00:11:45.344 } 00:11:45.344 ]' 00:11:45.344 12:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:45.344 12:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:45.344 12:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:45.344 12:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:45.344 12:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:45.344 12:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:45.344 12:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:45.344 12:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:45.602 12:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:01:ZDFjMDIxNzhjMjcwOTk2YmI0NzBmN2Y0NTliYWI4YjkvSJaJ: --dhchap-ctrl-secret DHHC-1:02:YWM4YTZmMWU1NTFkMmQ0N2NjYTRjMjI0ODc5N2Q4MDYwYmI1MTE4YjkyOWNjNjMwjf/aOg==: 00:11:46.538 12:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:46.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:46.538 12:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:11:46.538 12:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.538 12:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.538 12:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.538 12:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:46.538 12:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:46.538 12:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:46.797 12:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:11:46.797 12:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:46.797 12:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:46.797 12:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:46.797 12:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:46.797 12:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:46.797 12:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:46.797 12:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.797 12:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.797 12:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.797 12:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:46.797 12:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:47.054 00:11:47.054 12:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:47.054 12:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:47.054 12:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:47.625 12:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:47.625 12:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:47.625 12:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.625 12:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.625 12:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.625 12:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:47.625 { 00:11:47.625 "cntlid": 69, 00:11:47.625 "qid": 0, 00:11:47.625 "state": "enabled", 00:11:47.625 "thread": "nvmf_tgt_poll_group_000", 00:11:47.625 "listen_address": { 00:11:47.625 "trtype": "TCP", 00:11:47.625 "adrfam": "IPv4", 00:11:47.625 "traddr": "10.0.0.2", 00:11:47.625 "trsvcid": "4420" 00:11:47.625 }, 00:11:47.625 "peer_address": { 00:11:47.625 "trtype": "TCP", 00:11:47.625 "adrfam": "IPv4", 00:11:47.625 "traddr": "10.0.0.1", 00:11:47.625 "trsvcid": "33144" 00:11:47.625 }, 00:11:47.625 "auth": { 00:11:47.625 "state": "completed", 00:11:47.625 "digest": "sha384", 00:11:47.625 "dhgroup": "ffdhe3072" 00:11:47.625 } 00:11:47.625 } 00:11:47.625 ]' 00:11:47.625 12:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:47.625 12:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:47.625 12:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:47.625 12:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:47.625 12:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:47.625 12:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:47.625 12:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:47.625 12:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:47.883 12:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:02:YTgwOTU2ZGMyMjY4NTBhYzBhZGM1MTI1OTJhNDBiOTc1MzViNDdkYjhkNmY0M2EwVb/DEw==: --dhchap-ctrl-secret DHHC-1:01:ZWFmOGJiOTllMDUwODc2MWU4M2Q4NTg4OTUyZDE5MjBPakZ8: 00:11:48.817 12:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:48.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:48.817 12:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:11:48.817 12:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.817 12:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.817 12:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.817 12:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:48.817 12:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:48.817 12:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:11:48.817 12:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:11:48.817 12:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:48.817 12:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:48.817 12:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:11:48.817 12:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:48.817 12:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:48.817 12:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key3 00:11:48.817 12:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.817 12:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.817 12:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.817 12:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:48.817 12:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:49.382 00:11:49.382 12:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
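[Editor's note] After the SPDK host-side controller is detached, each cycle in this trace also exercises the kernel initiator: nvme-cli connects with the plaintext DHHC-1 secrets, then the connection and the host entry are torn down. The sketch below uses only commands and identifiers already present in this log (the key3 case is shown, which carries no controller secret); it assumes, as above, that rpc_cmd wraps the target-side rpc.py.

# Kernel-initiator leg of the cycle (key3; secret copied verbatim from the trace above).
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 \
    --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 \
    --dhchap-secret DHHC-1:03:YzM3ZjlmZGExOWIzOTIzYTM4MjY2YzJlMjc4YTJmNTI5YzM4YzRmMjA2N2Q1YmE4NTg4ZmQxOTVlZmUwZjRhNUvHC7o=:
# Tear down: drop the kernel controller, then revoke the host entry on the target subsystem.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5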
00:11:49.382 12:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:49.382 12:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:49.639 12:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:49.639 12:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:49.639 12:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.639 12:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.639 12:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.639 12:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:49.639 { 00:11:49.639 "cntlid": 71, 00:11:49.639 "qid": 0, 00:11:49.639 "state": "enabled", 00:11:49.639 "thread": "nvmf_tgt_poll_group_000", 00:11:49.639 "listen_address": { 00:11:49.639 "trtype": "TCP", 00:11:49.639 "adrfam": "IPv4", 00:11:49.639 "traddr": "10.0.0.2", 00:11:49.639 "trsvcid": "4420" 00:11:49.639 }, 00:11:49.639 "peer_address": { 00:11:49.639 "trtype": "TCP", 00:11:49.639 "adrfam": "IPv4", 00:11:49.639 "traddr": "10.0.0.1", 00:11:49.639 "trsvcid": "33172" 00:11:49.639 }, 00:11:49.639 "auth": { 00:11:49.639 "state": "completed", 00:11:49.639 "digest": "sha384", 00:11:49.639 "dhgroup": "ffdhe3072" 00:11:49.639 } 00:11:49.639 } 00:11:49.639 ]' 00:11:49.639 12:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:49.639 12:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:49.639 12:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:49.639 12:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:49.639 12:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:49.639 12:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:49.639 12:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:49.639 12:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:49.896 12:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:03:YzM3ZjlmZGExOWIzOTIzYTM4MjY2YzJlMjc4YTJmNTI5YzM4YzRmMjA2N2Q1YmE4NTg4ZmQxOTVlZmUwZjRhNUvHC7o=: 00:11:50.875 12:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:50.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:50.875 12:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:11:50.875 12:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.875 12:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.875 12:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.875 12:36:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:50.875 12:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:50.875 12:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:50.875 12:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:50.875 12:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:11:50.875 12:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:50.875 12:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:50.875 12:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:50.875 12:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:50.875 12:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:50.875 12:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:50.875 12:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.875 12:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.875 12:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.875 12:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:50.875 12:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:51.443 00:11:51.443 12:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:51.443 12:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:51.443 12:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:51.443 12:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:51.443 12:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:51.443 12:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.443 12:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.443 12:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.444 12:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:51.444 { 00:11:51.444 "cntlid": 73, 00:11:51.444 "qid": 0, 00:11:51.444 "state": "enabled", 00:11:51.444 "thread": "nvmf_tgt_poll_group_000", 00:11:51.444 "listen_address": { 00:11:51.444 "trtype": 
"TCP", 00:11:51.444 "adrfam": "IPv4", 00:11:51.444 "traddr": "10.0.0.2", 00:11:51.444 "trsvcid": "4420" 00:11:51.444 }, 00:11:51.444 "peer_address": { 00:11:51.444 "trtype": "TCP", 00:11:51.444 "adrfam": "IPv4", 00:11:51.444 "traddr": "10.0.0.1", 00:11:51.444 "trsvcid": "33198" 00:11:51.444 }, 00:11:51.444 "auth": { 00:11:51.444 "state": "completed", 00:11:51.444 "digest": "sha384", 00:11:51.444 "dhgroup": "ffdhe4096" 00:11:51.444 } 00:11:51.444 } 00:11:51.444 ]' 00:11:51.444 12:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:51.703 12:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:51.703 12:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:51.703 12:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:51.703 12:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:51.703 12:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:51.703 12:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:51.703 12:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:51.962 12:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:00:NDFjNTIxNjcxZGJlYzJkYTY3ODRiOWE5MDlmOTAxYTFhMGYyODdhMmE2MGFkZDQ58CXGvw==: --dhchap-ctrl-secret DHHC-1:03:NDk4NDYxNTNjY2RlNjE5ZDQ2Y2VlZWRjZjYzMDhiYzU4YTA3ZmJjYjk2MjJkMTcxYjU5Yjk5ZDcyMjU5MzhmOaZfXz8=: 00:11:52.529 12:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:52.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:52.529 12:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:11:52.529 12:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.529 12:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.529 12:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.529 12:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:52.530 12:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:52.530 12:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:52.788 12:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:11:52.788 12:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:52.788 12:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:52.788 12:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:52.788 12:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:11:52.788 12:36:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:52.789 12:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:52.789 12:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.789 12:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.789 12:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.789 12:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:52.789 12:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:53.355 00:11:53.356 12:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:53.356 12:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:53.356 12:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:53.614 12:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:53.614 12:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:53.614 12:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.614 12:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.614 12:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.614 12:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:53.614 { 00:11:53.614 "cntlid": 75, 00:11:53.614 "qid": 0, 00:11:53.614 "state": "enabled", 00:11:53.614 "thread": "nvmf_tgt_poll_group_000", 00:11:53.614 "listen_address": { 00:11:53.614 "trtype": "TCP", 00:11:53.614 "adrfam": "IPv4", 00:11:53.614 "traddr": "10.0.0.2", 00:11:53.614 "trsvcid": "4420" 00:11:53.614 }, 00:11:53.614 "peer_address": { 00:11:53.614 "trtype": "TCP", 00:11:53.614 "adrfam": "IPv4", 00:11:53.614 "traddr": "10.0.0.1", 00:11:53.614 "trsvcid": "33222" 00:11:53.614 }, 00:11:53.614 "auth": { 00:11:53.614 "state": "completed", 00:11:53.614 "digest": "sha384", 00:11:53.614 "dhgroup": "ffdhe4096" 00:11:53.614 } 00:11:53.614 } 00:11:53.614 ]' 00:11:53.614 12:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:53.614 12:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:53.614 12:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:53.614 12:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:53.614 12:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:53.614 12:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed 
== \c\o\m\p\l\e\t\e\d ]] 00:11:53.614 12:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:53.614 12:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:53.873 12:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:01:ZDFjMDIxNzhjMjcwOTk2YmI0NzBmN2Y0NTliYWI4YjkvSJaJ: --dhchap-ctrl-secret DHHC-1:02:YWM4YTZmMWU1NTFkMmQ0N2NjYTRjMjI0ODc5N2Q4MDYwYmI1MTE4YjkyOWNjNjMwjf/aOg==: 00:11:54.808 12:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.808 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.808 12:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:11:54.808 12:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.808 12:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.808 12:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.808 12:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:54.808 12:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:54.808 12:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:54.808 12:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:11:54.808 12:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:54.808 12:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:54.808 12:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:54.808 12:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:11:54.808 12:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:54.808 12:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:54.808 12:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.808 12:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.808 12:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.809 12:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:54.809 12:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:55.398 00:11:55.398 12:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:55.398 12:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.398 12:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:55.398 12:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.398 12:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.398 12:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.398 12:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.398 12:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.399 12:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:55.399 { 00:11:55.399 "cntlid": 77, 00:11:55.399 "qid": 0, 00:11:55.399 "state": "enabled", 00:11:55.399 "thread": "nvmf_tgt_poll_group_000", 00:11:55.399 "listen_address": { 00:11:55.399 "trtype": "TCP", 00:11:55.399 "adrfam": "IPv4", 00:11:55.399 "traddr": "10.0.0.2", 00:11:55.399 "trsvcid": "4420" 00:11:55.399 }, 00:11:55.399 "peer_address": { 00:11:55.399 "trtype": "TCP", 00:11:55.399 "adrfam": "IPv4", 00:11:55.399 "traddr": "10.0.0.1", 00:11:55.399 "trsvcid": "38434" 00:11:55.399 }, 00:11:55.399 "auth": { 00:11:55.399 "state": "completed", 00:11:55.399 "digest": "sha384", 00:11:55.399 "dhgroup": "ffdhe4096" 00:11:55.399 } 00:11:55.399 } 00:11:55.399 ]' 00:11:55.399 12:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:55.656 12:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:55.656 12:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:55.656 12:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:55.656 12:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:55.656 12:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.656 12:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.656 12:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:55.915 12:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:02:YTgwOTU2ZGMyMjY4NTBhYzBhZGM1MTI1OTJhNDBiOTc1MzViNDdkYjhkNmY0M2EwVb/DEw==: --dhchap-ctrl-secret DHHC-1:01:ZWFmOGJiOTllMDUwODc2MWU4M2Q4NTg4OTUyZDE5MjBPakZ8: 00:11:56.482 12:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:56.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.482 12:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:11:56.482 12:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.482 12:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.482 12:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.482 12:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:56.482 12:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:56.482 12:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:11:57.049 12:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:11:57.049 12:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:57.049 12:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:11:57.049 12:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:11:57.049 12:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:11:57.049 12:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:57.050 12:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key3 00:11:57.050 12:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.050 12:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.050 12:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.050 12:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:57.050 12:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:11:57.306 00:11:57.306 12:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:57.306 12:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:57.306 12:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:57.566 12:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:57.566 12:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:57.566 12:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.566 12:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.566 12:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.566 12:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:11:57.566 { 00:11:57.566 "cntlid": 79, 00:11:57.566 "qid": 0, 00:11:57.566 "state": "enabled", 00:11:57.566 "thread": "nvmf_tgt_poll_group_000", 00:11:57.566 "listen_address": { 00:11:57.566 "trtype": "TCP", 00:11:57.566 "adrfam": "IPv4", 00:11:57.566 "traddr": "10.0.0.2", 00:11:57.566 "trsvcid": "4420" 00:11:57.566 }, 00:11:57.566 "peer_address": { 00:11:57.566 "trtype": "TCP", 00:11:57.566 "adrfam": "IPv4", 00:11:57.566 "traddr": "10.0.0.1", 00:11:57.566 "trsvcid": "38468" 00:11:57.566 }, 00:11:57.566 "auth": { 00:11:57.566 "state": "completed", 00:11:57.566 "digest": "sha384", 00:11:57.566 "dhgroup": "ffdhe4096" 00:11:57.566 } 00:11:57.566 } 00:11:57.566 ]' 00:11:57.566 12:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:57.566 12:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:57.566 12:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:57.566 12:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:57.566 12:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:57.836 12:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:57.836 12:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:57.836 12:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:58.094 12:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:03:YzM3ZjlmZGExOWIzOTIzYTM4MjY2YzJlMjc4YTJmNTI5YzM4YzRmMjA2N2Q1YmE4NTg4ZmQxOTVlZmUwZjRhNUvHC7o=: 00:11:58.661 12:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:58.661 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:58.662 12:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:11:58.662 12:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.662 12:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.662 12:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.662 12:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:11:58.662 12:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:11:58.662 12:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:58.662 12:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:58.920 12:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:11:58.920 12:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:11:58.920 12:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:11:58.920 12:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:11:58.920 12:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:11:58.920 12:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:58.920 12:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:58.920 12:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.920 12:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.920 12:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.920 12:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:58.920 12:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:59.486 00:11:59.486 12:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:11:59.486 12:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:11:59.486 12:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.744 12:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.744 12:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:59.744 12:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.744 12:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.744 12:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.744 12:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:11:59.744 { 00:11:59.744 "cntlid": 81, 00:11:59.744 "qid": 0, 00:11:59.744 "state": "enabled", 00:11:59.744 "thread": "nvmf_tgt_poll_group_000", 00:11:59.744 "listen_address": { 00:11:59.744 "trtype": "TCP", 00:11:59.744 "adrfam": "IPv4", 00:11:59.744 "traddr": "10.0.0.2", 00:11:59.744 "trsvcid": "4420" 00:11:59.744 }, 00:11:59.744 "peer_address": { 00:11:59.744 "trtype": "TCP", 00:11:59.744 "adrfam": "IPv4", 00:11:59.744 "traddr": "10.0.0.1", 00:11:59.744 "trsvcid": "38484" 00:11:59.744 }, 00:11:59.744 "auth": { 00:11:59.744 "state": "completed", 00:11:59.744 "digest": "sha384", 00:11:59.744 "dhgroup": "ffdhe6144" 00:11:59.744 } 00:11:59.744 } 00:11:59.744 ]' 00:11:59.744 12:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:11:59.744 12:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:59.744 12:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:11:59.744 12:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == 
\f\f\d\h\e\6\1\4\4 ]] 00:11:59.744 12:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:11:59.744 12:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:59.745 12:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:59.745 12:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:00.003 12:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:00:NDFjNTIxNjcxZGJlYzJkYTY3ODRiOWE5MDlmOTAxYTFhMGYyODdhMmE2MGFkZDQ58CXGvw==: --dhchap-ctrl-secret DHHC-1:03:NDk4NDYxNTNjY2RlNjE5ZDQ2Y2VlZWRjZjYzMDhiYzU4YTA3ZmJjYjk2MjJkMTcxYjU5Yjk5ZDcyMjU5MzhmOaZfXz8=: 00:12:00.938 12:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:00.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:00.938 12:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:12:00.938 12:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.938 12:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.938 12:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.938 12:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:00.938 12:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:00.938 12:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:01.197 12:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:12:01.197 12:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:01.197 12:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:01.197 12:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:01.197 12:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:01.197 12:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:01.197 12:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:01.197 12:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.197 12:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.197 12:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.197 12:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:01.197 12:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:01.457 00:12:01.457 12:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:01.457 12:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:01.457 12:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:02.023 12:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:02.023 12:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:02.023 12:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.023 12:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.023 12:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.023 12:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:02.023 { 00:12:02.023 "cntlid": 83, 00:12:02.023 "qid": 0, 00:12:02.023 "state": "enabled", 00:12:02.023 "thread": "nvmf_tgt_poll_group_000", 00:12:02.023 "listen_address": { 00:12:02.023 "trtype": "TCP", 00:12:02.023 "adrfam": "IPv4", 00:12:02.023 "traddr": "10.0.0.2", 00:12:02.023 "trsvcid": "4420" 00:12:02.023 }, 00:12:02.023 "peer_address": { 00:12:02.023 "trtype": "TCP", 00:12:02.023 "adrfam": "IPv4", 00:12:02.023 "traddr": "10.0.0.1", 00:12:02.023 "trsvcid": "38516" 00:12:02.023 }, 00:12:02.023 "auth": { 00:12:02.023 "state": "completed", 00:12:02.023 "digest": "sha384", 00:12:02.023 "dhgroup": "ffdhe6144" 00:12:02.023 } 00:12:02.023 } 00:12:02.023 ]' 00:12:02.023 12:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:02.023 12:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:02.023 12:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:02.023 12:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:02.023 12:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:02.023 12:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:02.023 12:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:02.023 12:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:02.280 12:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:01:ZDFjMDIxNzhjMjcwOTk2YmI0NzBmN2Y0NTliYWI4YjkvSJaJ: --dhchap-ctrl-secret DHHC-1:02:YWM4YTZmMWU1NTFkMmQ0N2NjYTRjMjI0ODc5N2Q4MDYwYmI1MTE4YjkyOWNjNjMwjf/aOg==: 00:12:03.215 12:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:12:03.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:03.215 12:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:12:03.215 12:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.215 12:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.215 12:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.215 12:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:03.215 12:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:03.215 12:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:03.215 12:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:12:03.215 12:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:03.215 12:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:03.215 12:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:03.215 12:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:03.215 12:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:03.215 12:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:03.215 12:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.216 12:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.216 12:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.216 12:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:03.216 12:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:03.781 00:12:03.781 12:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:03.781 12:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:03.781 12:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:04.038 12:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:04.038 12:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:04.038 12:36:29 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.038 12:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.038 12:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.038 12:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:04.038 { 00:12:04.038 "cntlid": 85, 00:12:04.038 "qid": 0, 00:12:04.038 "state": "enabled", 00:12:04.038 "thread": "nvmf_tgt_poll_group_000", 00:12:04.038 "listen_address": { 00:12:04.038 "trtype": "TCP", 00:12:04.038 "adrfam": "IPv4", 00:12:04.038 "traddr": "10.0.0.2", 00:12:04.038 "trsvcid": "4420" 00:12:04.038 }, 00:12:04.038 "peer_address": { 00:12:04.038 "trtype": "TCP", 00:12:04.038 "adrfam": "IPv4", 00:12:04.038 "traddr": "10.0.0.1", 00:12:04.038 "trsvcid": "37102" 00:12:04.038 }, 00:12:04.038 "auth": { 00:12:04.038 "state": "completed", 00:12:04.038 "digest": "sha384", 00:12:04.038 "dhgroup": "ffdhe6144" 00:12:04.038 } 00:12:04.038 } 00:12:04.038 ]' 00:12:04.038 12:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:04.038 12:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:04.038 12:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:04.038 12:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:04.038 12:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:04.299 12:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:04.299 12:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:04.299 12:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:04.555 12:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:02:YTgwOTU2ZGMyMjY4NTBhYzBhZGM1MTI1OTJhNDBiOTc1MzViNDdkYjhkNmY0M2EwVb/DEw==: --dhchap-ctrl-secret DHHC-1:01:ZWFmOGJiOTllMDUwODc2MWU4M2Q4NTg4OTUyZDE5MjBPakZ8: 00:12:05.119 12:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:05.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:05.119 12:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:12:05.119 12:36:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.119 12:36:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.119 12:36:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.119 12:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:05.119 12:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:05.119 12:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:05.412 12:36:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:12:05.412 12:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:05.412 12:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:05.412 12:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:05.412 12:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:05.412 12:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:05.412 12:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key3 00:12:05.412 12:36:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.412 12:36:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.412 12:36:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.412 12:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:05.412 12:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:05.979 00:12:05.979 12:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:05.979 12:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:05.979 12:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:06.236 12:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:06.236 12:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:06.236 12:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.236 12:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.236 12:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.236 12:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:06.236 { 00:12:06.236 "cntlid": 87, 00:12:06.236 "qid": 0, 00:12:06.236 "state": "enabled", 00:12:06.236 "thread": "nvmf_tgt_poll_group_000", 00:12:06.236 "listen_address": { 00:12:06.236 "trtype": "TCP", 00:12:06.236 "adrfam": "IPv4", 00:12:06.236 "traddr": "10.0.0.2", 00:12:06.236 "trsvcid": "4420" 00:12:06.236 }, 00:12:06.236 "peer_address": { 00:12:06.236 "trtype": "TCP", 00:12:06.236 "adrfam": "IPv4", 00:12:06.236 "traddr": "10.0.0.1", 00:12:06.236 "trsvcid": "37126" 00:12:06.236 }, 00:12:06.236 "auth": { 00:12:06.236 "state": "completed", 00:12:06.236 "digest": "sha384", 00:12:06.236 "dhgroup": "ffdhe6144" 00:12:06.236 } 00:12:06.236 } 00:12:06.236 ]' 00:12:06.236 12:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:06.236 12:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:12:06.236 12:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:06.236 12:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:06.236 12:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:06.494 12:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:06.494 12:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:06.494 12:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:06.751 12:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:03:YzM3ZjlmZGExOWIzOTIzYTM4MjY2YzJlMjc4YTJmNTI5YzM4YzRmMjA2N2Q1YmE4NTg4ZmQxOTVlZmUwZjRhNUvHC7o=: 00:12:07.317 12:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:07.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:07.317 12:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:12:07.317 12:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.317 12:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.317 12:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.317 12:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:07.317 12:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:07.317 12:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:07.317 12:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:07.574 12:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:12:07.574 12:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:07.574 12:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:07.574 12:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:07.574 12:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:07.574 12:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:07.574 12:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:07.574 12:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.574 12:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.574 12:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.574 12:36:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:07.574 12:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:08.506 00:12:08.506 12:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:08.506 12:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:08.506 12:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:08.506 12:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:08.506 12:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:08.506 12:36:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.506 12:36:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.765 12:36:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.765 12:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:08.765 { 00:12:08.765 "cntlid": 89, 00:12:08.765 "qid": 0, 00:12:08.765 "state": "enabled", 00:12:08.765 "thread": "nvmf_tgt_poll_group_000", 00:12:08.765 "listen_address": { 00:12:08.765 "trtype": "TCP", 00:12:08.765 "adrfam": "IPv4", 00:12:08.765 "traddr": "10.0.0.2", 00:12:08.765 "trsvcid": "4420" 00:12:08.765 }, 00:12:08.765 "peer_address": { 00:12:08.765 "trtype": "TCP", 00:12:08.765 "adrfam": "IPv4", 00:12:08.765 "traddr": "10.0.0.1", 00:12:08.765 "trsvcid": "37142" 00:12:08.765 }, 00:12:08.765 "auth": { 00:12:08.765 "state": "completed", 00:12:08.765 "digest": "sha384", 00:12:08.765 "dhgroup": "ffdhe8192" 00:12:08.765 } 00:12:08.765 } 00:12:08.765 ]' 00:12:08.765 12:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:08.765 12:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:08.765 12:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:08.765 12:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:08.765 12:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:08.765 12:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:08.765 12:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:08.765 12:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:09.023 12:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret 
DHHC-1:00:NDFjNTIxNjcxZGJlYzJkYTY3ODRiOWE5MDlmOTAxYTFhMGYyODdhMmE2MGFkZDQ58CXGvw==: --dhchap-ctrl-secret DHHC-1:03:NDk4NDYxNTNjY2RlNjE5ZDQ2Y2VlZWRjZjYzMDhiYzU4YTA3ZmJjYjk2MjJkMTcxYjU5Yjk5ZDcyMjU5MzhmOaZfXz8=: 00:12:09.590 12:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:09.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:09.848 12:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:12:09.848 12:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.848 12:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.848 12:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.848 12:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:09.848 12:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:09.848 12:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:10.105 12:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:12:10.105 12:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:10.105 12:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:10.105 12:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:10.105 12:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:10.105 12:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:10.105 12:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:10.105 12:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.105 12:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.105 12:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.105 12:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:10.105 12:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:10.671 00:12:10.671 12:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:10.671 12:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:10.671 12:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
00:12:10.930 12:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.930 12:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.930 12:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.930 12:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.930 12:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.930 12:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:10.930 { 00:12:10.930 "cntlid": 91, 00:12:10.930 "qid": 0, 00:12:10.930 "state": "enabled", 00:12:10.930 "thread": "nvmf_tgt_poll_group_000", 00:12:10.930 "listen_address": { 00:12:10.930 "trtype": "TCP", 00:12:10.930 "adrfam": "IPv4", 00:12:10.930 "traddr": "10.0.0.2", 00:12:10.930 "trsvcid": "4420" 00:12:10.930 }, 00:12:10.930 "peer_address": { 00:12:10.930 "trtype": "TCP", 00:12:10.930 "adrfam": "IPv4", 00:12:10.930 "traddr": "10.0.0.1", 00:12:10.930 "trsvcid": "37158" 00:12:10.930 }, 00:12:10.930 "auth": { 00:12:10.930 "state": "completed", 00:12:10.930 "digest": "sha384", 00:12:10.930 "dhgroup": "ffdhe8192" 00:12:10.930 } 00:12:10.930 } 00:12:10.930 ]' 00:12:10.930 12:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:10.930 12:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:10.930 12:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:10.930 12:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:10.930 12:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:11.188 12:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:11.188 12:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:11.188 12:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:11.446 12:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:01:ZDFjMDIxNzhjMjcwOTk2YmI0NzBmN2Y0NTliYWI4YjkvSJaJ: --dhchap-ctrl-secret DHHC-1:02:YWM4YTZmMWU1NTFkMmQ0N2NjYTRjMjI0ODc5N2Q4MDYwYmI1MTE4YjkyOWNjNjMwjf/aOg==: 00:12:12.033 12:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:12.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:12.033 12:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:12:12.033 12:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.033 12:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.033 12:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.033 12:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:12.033 12:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe8192 00:12:12.033 12:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:12.291 12:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:12:12.291 12:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:12.291 12:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:12.291 12:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:12.291 12:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:12.291 12:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:12.291 12:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:12.291 12:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.291 12:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.291 12:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.291 12:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:12.291 12:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:12.857 00:12:12.857 12:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:12.857 12:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:12.857 12:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:13.424 12:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:13.424 12:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:13.424 12:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.424 12:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.424 12:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.424 12:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:13.424 { 00:12:13.424 "cntlid": 93, 00:12:13.424 "qid": 0, 00:12:13.424 "state": "enabled", 00:12:13.424 "thread": "nvmf_tgt_poll_group_000", 00:12:13.424 "listen_address": { 00:12:13.424 "trtype": "TCP", 00:12:13.424 "adrfam": "IPv4", 00:12:13.424 "traddr": "10.0.0.2", 00:12:13.424 "trsvcid": "4420" 00:12:13.424 }, 00:12:13.424 "peer_address": { 00:12:13.424 "trtype": "TCP", 00:12:13.424 "adrfam": "IPv4", 00:12:13.424 "traddr": "10.0.0.1", 00:12:13.424 "trsvcid": "37182" 00:12:13.424 }, 00:12:13.424 
"auth": { 00:12:13.424 "state": "completed", 00:12:13.424 "digest": "sha384", 00:12:13.424 "dhgroup": "ffdhe8192" 00:12:13.424 } 00:12:13.424 } 00:12:13.424 ]' 00:12:13.424 12:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:13.424 12:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:13.424 12:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:13.424 12:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:13.424 12:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:13.424 12:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:13.424 12:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:13.424 12:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:13.683 12:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:02:YTgwOTU2ZGMyMjY4NTBhYzBhZGM1MTI1OTJhNDBiOTc1MzViNDdkYjhkNmY0M2EwVb/DEw==: --dhchap-ctrl-secret DHHC-1:01:ZWFmOGJiOTllMDUwODc2MWU4M2Q4NTg4OTUyZDE5MjBPakZ8: 00:12:14.617 12:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:14.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:14.617 12:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:12:14.617 12:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.617 12:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.617 12:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.617 12:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:14.617 12:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:14.617 12:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:14.874 12:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:12:14.874 12:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:14.874 12:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:12:14.874 12:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:14.874 12:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:14.874 12:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:14.874 12:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key3 00:12:14.874 12:36:40 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.874 12:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.874 12:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.874 12:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:14.874 12:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:15.439 00:12:15.439 12:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:15.439 12:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:15.439 12:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:15.698 12:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:15.699 12:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:15.699 12:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.699 12:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.699 12:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.699 12:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:15.699 { 00:12:15.699 "cntlid": 95, 00:12:15.699 "qid": 0, 00:12:15.699 "state": "enabled", 00:12:15.699 "thread": "nvmf_tgt_poll_group_000", 00:12:15.699 "listen_address": { 00:12:15.699 "trtype": "TCP", 00:12:15.699 "adrfam": "IPv4", 00:12:15.699 "traddr": "10.0.0.2", 00:12:15.699 "trsvcid": "4420" 00:12:15.699 }, 00:12:15.699 "peer_address": { 00:12:15.699 "trtype": "TCP", 00:12:15.699 "adrfam": "IPv4", 00:12:15.699 "traddr": "10.0.0.1", 00:12:15.699 "trsvcid": "60186" 00:12:15.699 }, 00:12:15.699 "auth": { 00:12:15.699 "state": "completed", 00:12:15.699 "digest": "sha384", 00:12:15.699 "dhgroup": "ffdhe8192" 00:12:15.699 } 00:12:15.699 } 00:12:15.699 ]' 00:12:15.699 12:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:15.699 12:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:15.699 12:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:15.699 12:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:15.699 12:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:15.699 12:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:15.699 12:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:15.699 12:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:16.265 12:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:03:YzM3ZjlmZGExOWIzOTIzYTM4MjY2YzJlMjc4YTJmNTI5YzM4YzRmMjA2N2Q1YmE4NTg4ZmQxOTVlZmUwZjRhNUvHC7o=: 00:12:16.830 12:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:16.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:16.830 12:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:12:16.830 12:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.830 12:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.830 12:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.830 12:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:12:16.830 12:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:16.830 12:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:16.830 12:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:16.830 12:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:17.087 12:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:12:17.087 12:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:17.087 12:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:17.087 12:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:17.087 12:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:17.087 12:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:17.087 12:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.087 12:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.087 12:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.087 12:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.087 12:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.087 12:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.344 00:12:17.602 12:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 
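For orientation, every connect_authenticate round recorded in this log follows the same sequence of RPC calls. The sketch below is reconstructed purely from the invocations visible above (subsystem nqn.2024-03.io.spdk:cnode0, host UUID 16360ad5-8c23-4d49-afe0-9a35c426fec5, keys key0..key3 with controller keys ckey0..ckey3); the variable names, and the use of rpc_cmd / hostrpc as the suite's target-side and host-side RPC wrappers, are illustrative assumptions rather than an excerpt from target/auth.sh.

  # one DH-HMAC-CHAP round, as exercised above (digest/dhgroup/keyid stand in for the loop variables)
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5
  digest=sha512
  dhgroup=null
  keyid=0
  # limit the host-side bdev_nvme layer to the digest/dhgroup under test
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  # allow the host on the target subsystem with this round's key pair
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
  # attach a host-side controller that must authenticate with the same keys
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"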
00:12:17.602 12:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:17.602 12:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:17.602 12:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:17.602 12:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:17.602 12:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.602 12:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.860 12:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.860 12:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:17.860 { 00:12:17.860 "cntlid": 97, 00:12:17.860 "qid": 0, 00:12:17.860 "state": "enabled", 00:12:17.860 "thread": "nvmf_tgt_poll_group_000", 00:12:17.860 "listen_address": { 00:12:17.860 "trtype": "TCP", 00:12:17.860 "adrfam": "IPv4", 00:12:17.860 "traddr": "10.0.0.2", 00:12:17.860 "trsvcid": "4420" 00:12:17.860 }, 00:12:17.860 "peer_address": { 00:12:17.860 "trtype": "TCP", 00:12:17.860 "adrfam": "IPv4", 00:12:17.860 "traddr": "10.0.0.1", 00:12:17.860 "trsvcid": "60208" 00:12:17.860 }, 00:12:17.860 "auth": { 00:12:17.860 "state": "completed", 00:12:17.860 "digest": "sha512", 00:12:17.860 "dhgroup": "null" 00:12:17.860 } 00:12:17.860 } 00:12:17.860 ]' 00:12:17.860 12:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:17.860 12:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:17.860 12:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:17.860 12:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:17.860 12:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:17.860 12:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:17.860 12:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:17.860 12:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:18.118 12:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:00:NDFjNTIxNjcxZGJlYzJkYTY3ODRiOWE5MDlmOTAxYTFhMGYyODdhMmE2MGFkZDQ58CXGvw==: --dhchap-ctrl-secret DHHC-1:03:NDk4NDYxNTNjY2RlNjE5ZDQ2Y2VlZWRjZjYzMDhiYzU4YTA3ZmJjYjk2MjJkMTcxYjU5Yjk5ZDcyMjU5MzhmOaZfXz8=: 00:12:19.050 12:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:19.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:19.050 12:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:12:19.050 12:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.050 12:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.050 12:36:44 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.050 12:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:19.050 12:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:19.050 12:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:19.050 12:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:12:19.050 12:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:19.050 12:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:19.050 12:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:19.050 12:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:19.050 12:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:19.050 12:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.050 12:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.050 12:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.050 12:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.050 12:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.050 12:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.616 00:12:19.616 12:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:19.616 12:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:19.616 12:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:19.616 12:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:19.616 12:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:19.616 12:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:19.616 12:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.616 12:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:19.616 12:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:19.616 { 00:12:19.616 "cntlid": 99, 00:12:19.616 "qid": 0, 00:12:19.616 "state": "enabled", 00:12:19.616 "thread": "nvmf_tgt_poll_group_000", 00:12:19.616 "listen_address": { 00:12:19.616 "trtype": "TCP", 00:12:19.616 "adrfam": 
"IPv4", 00:12:19.616 "traddr": "10.0.0.2", 00:12:19.616 "trsvcid": "4420" 00:12:19.616 }, 00:12:19.616 "peer_address": { 00:12:19.616 "trtype": "TCP", 00:12:19.616 "adrfam": "IPv4", 00:12:19.616 "traddr": "10.0.0.1", 00:12:19.616 "trsvcid": "60234" 00:12:19.616 }, 00:12:19.616 "auth": { 00:12:19.616 "state": "completed", 00:12:19.616 "digest": "sha512", 00:12:19.616 "dhgroup": "null" 00:12:19.616 } 00:12:19.616 } 00:12:19.616 ]' 00:12:19.616 12:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:19.875 12:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:19.875 12:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:19.875 12:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:19.875 12:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:19.875 12:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:19.875 12:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:19.875 12:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:20.133 12:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:01:ZDFjMDIxNzhjMjcwOTk2YmI0NzBmN2Y0NTliYWI4YjkvSJaJ: --dhchap-ctrl-secret DHHC-1:02:YWM4YTZmMWU1NTFkMmQ0N2NjYTRjMjI0ODc5N2Q4MDYwYmI1MTE4YjkyOWNjNjMwjf/aOg==: 00:12:20.710 12:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:20.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:20.710 12:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:12:20.710 12:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.710 12:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.710 12:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.710 12:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:20.710 12:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:20.710 12:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:20.982 12:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:12:20.982 12:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:20.982 12:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:20.982 12:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:20.982 12:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:20.982 12:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:20.982 12:36:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:20.982 12:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.982 12:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.982 12:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.982 12:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:20.982 12:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:21.240 00:12:21.498 12:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:21.498 12:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:21.498 12:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:21.498 12:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:21.498 12:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:21.498 12:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:21.498 12:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.498 12:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:21.498 12:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:21.498 { 00:12:21.498 "cntlid": 101, 00:12:21.498 "qid": 0, 00:12:21.498 "state": "enabled", 00:12:21.498 "thread": "nvmf_tgt_poll_group_000", 00:12:21.498 "listen_address": { 00:12:21.498 "trtype": "TCP", 00:12:21.498 "adrfam": "IPv4", 00:12:21.498 "traddr": "10.0.0.2", 00:12:21.498 "trsvcid": "4420" 00:12:21.498 }, 00:12:21.498 "peer_address": { 00:12:21.498 "trtype": "TCP", 00:12:21.498 "adrfam": "IPv4", 00:12:21.498 "traddr": "10.0.0.1", 00:12:21.498 "trsvcid": "60266" 00:12:21.498 }, 00:12:21.498 "auth": { 00:12:21.498 "state": "completed", 00:12:21.498 "digest": "sha512", 00:12:21.498 "dhgroup": "null" 00:12:21.498 } 00:12:21.498 } 00:12:21.498 ]' 00:12:21.498 12:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:21.757 12:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:21.757 12:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:21.757 12:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:21.757 12:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:21.757 12:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:21.757 12:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
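The verification and teardown steps that follow each attach are equally uniform. The snippet below condenses the jq checks and cleanup as they appear in the surrounding entries; it reuses the illustrative variables from the sketch above, and the DHHC-1 arguments are placeholders for the base64 secrets printed in the log (for the key3 rounds the controller secret is omitted, matching the entries here).

  # the single qpair on the subsystem must report a completed DH-HMAC-CHAP negotiation with the expected parameters
  [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]
  hostrpc bdev_nvme_detach_controller nvme0
  # repeat the handshake with the kernel initiator, then remove the host entry before the next round
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 \
    --dhchap-secret "DHHC-1:xx:<secret>" --dhchap-ctrl-secret "DHHC-1:xx:<ctrl-secret>"
  nvme disconnect -n "$subnqn"
  rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"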
00:12:21.757 12:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:22.015 12:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:02:YTgwOTU2ZGMyMjY4NTBhYzBhZGM1MTI1OTJhNDBiOTc1MzViNDdkYjhkNmY0M2EwVb/DEw==: --dhchap-ctrl-secret DHHC-1:01:ZWFmOGJiOTllMDUwODc2MWU4M2Q4NTg4OTUyZDE5MjBPakZ8: 00:12:22.580 12:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:22.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:22.580 12:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:12:22.580 12:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.580 12:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.580 12:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.580 12:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:22.580 12:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:22.580 12:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:22.837 12:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:12:22.838 12:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:22.838 12:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:22.838 12:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:12:22.838 12:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:22.838 12:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:22.838 12:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key3 00:12:22.838 12:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.838 12:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.838 12:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.838 12:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:22.838 12:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:23.095 00:12:23.353 12:36:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:23.353 12:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:23.353 12:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.611 12:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:23.611 12:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:23.611 12:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.611 12:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.611 12:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.611 12:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:23.611 { 00:12:23.611 "cntlid": 103, 00:12:23.611 "qid": 0, 00:12:23.611 "state": "enabled", 00:12:23.611 "thread": "nvmf_tgt_poll_group_000", 00:12:23.611 "listen_address": { 00:12:23.611 "trtype": "TCP", 00:12:23.611 "adrfam": "IPv4", 00:12:23.611 "traddr": "10.0.0.2", 00:12:23.611 "trsvcid": "4420" 00:12:23.611 }, 00:12:23.611 "peer_address": { 00:12:23.611 "trtype": "TCP", 00:12:23.611 "adrfam": "IPv4", 00:12:23.611 "traddr": "10.0.0.1", 00:12:23.611 "trsvcid": "60292" 00:12:23.611 }, 00:12:23.611 "auth": { 00:12:23.611 "state": "completed", 00:12:23.611 "digest": "sha512", 00:12:23.611 "dhgroup": "null" 00:12:23.611 } 00:12:23.611 } 00:12:23.611 ]' 00:12:23.611 12:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:23.611 12:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:23.611 12:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:23.611 12:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:12:23.611 12:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:23.611 12:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:23.611 12:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:23.611 12:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:23.870 12:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:03:YzM3ZjlmZGExOWIzOTIzYTM4MjY2YzJlMjc4YTJmNTI5YzM4YzRmMjA2N2Q1YmE4NTg4ZmQxOTVlZmUwZjRhNUvHC7o=: 00:12:24.513 12:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:24.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:24.513 12:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:12:24.513 12:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.513 12:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.513 12:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:12:24.513 12:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:24.513 12:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:24.513 12:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:24.513 12:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:25.080 12:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:12:25.080 12:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:25.080 12:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:25.080 12:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:25.080 12:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:25.080 12:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:25.080 12:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:25.080 12:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.080 12:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.080 12:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.080 12:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:25.080 12:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:25.346 00:12:25.347 12:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:25.347 12:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:25.347 12:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:25.606 12:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:25.606 12:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:25.606 12:36:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.606 12:36:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.606 12:36:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.606 12:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:25.606 { 00:12:25.606 "cntlid": 105, 00:12:25.606 "qid": 0, 00:12:25.606 "state": "enabled", 00:12:25.606 "thread": "nvmf_tgt_poll_group_000", 00:12:25.606 
"listen_address": { 00:12:25.606 "trtype": "TCP", 00:12:25.606 "adrfam": "IPv4", 00:12:25.606 "traddr": "10.0.0.2", 00:12:25.606 "trsvcid": "4420" 00:12:25.606 }, 00:12:25.606 "peer_address": { 00:12:25.606 "trtype": "TCP", 00:12:25.606 "adrfam": "IPv4", 00:12:25.606 "traddr": "10.0.0.1", 00:12:25.606 "trsvcid": "38050" 00:12:25.606 }, 00:12:25.606 "auth": { 00:12:25.606 "state": "completed", 00:12:25.606 "digest": "sha512", 00:12:25.606 "dhgroup": "ffdhe2048" 00:12:25.606 } 00:12:25.606 } 00:12:25.606 ]' 00:12:25.606 12:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:25.606 12:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:25.606 12:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:25.606 12:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:25.606 12:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:25.863 12:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:25.863 12:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:25.863 12:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:26.121 12:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:00:NDFjNTIxNjcxZGJlYzJkYTY3ODRiOWE5MDlmOTAxYTFhMGYyODdhMmE2MGFkZDQ58CXGvw==: --dhchap-ctrl-secret DHHC-1:03:NDk4NDYxNTNjY2RlNjE5ZDQ2Y2VlZWRjZjYzMDhiYzU4YTA3ZmJjYjk2MjJkMTcxYjU5Yjk5ZDcyMjU5MzhmOaZfXz8=: 00:12:26.688 12:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:26.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:26.688 12:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:12:26.688 12:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.688 12:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.688 12:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.688 12:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:26.688 12:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:26.688 12:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:26.945 12:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:12:26.945 12:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:26.945 12:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:26.945 12:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:26.945 12:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key1 00:12:26.945 12:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:26.945 12:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:26.945 12:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.945 12:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.945 12:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.945 12:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:26.945 12:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:27.203 00:12:27.461 12:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:27.461 12:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:27.461 12:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:27.720 12:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:27.720 12:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:27.720 12:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.720 12:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.720 12:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.720 12:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:27.720 { 00:12:27.721 "cntlid": 107, 00:12:27.721 "qid": 0, 00:12:27.721 "state": "enabled", 00:12:27.721 "thread": "nvmf_tgt_poll_group_000", 00:12:27.721 "listen_address": { 00:12:27.721 "trtype": "TCP", 00:12:27.721 "adrfam": "IPv4", 00:12:27.721 "traddr": "10.0.0.2", 00:12:27.721 "trsvcid": "4420" 00:12:27.721 }, 00:12:27.721 "peer_address": { 00:12:27.721 "trtype": "TCP", 00:12:27.721 "adrfam": "IPv4", 00:12:27.721 "traddr": "10.0.0.1", 00:12:27.721 "trsvcid": "38074" 00:12:27.721 }, 00:12:27.721 "auth": { 00:12:27.721 "state": "completed", 00:12:27.721 "digest": "sha512", 00:12:27.721 "dhgroup": "ffdhe2048" 00:12:27.721 } 00:12:27.721 } 00:12:27.721 ]' 00:12:27.721 12:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:27.721 12:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:27.721 12:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:27.721 12:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:27.721 12:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:27.721 12:36:53 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:27.721 12:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:27.721 12:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:27.979 12:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:01:ZDFjMDIxNzhjMjcwOTk2YmI0NzBmN2Y0NTliYWI4YjkvSJaJ: --dhchap-ctrl-secret DHHC-1:02:YWM4YTZmMWU1NTFkMmQ0N2NjYTRjMjI0ODc5N2Q4MDYwYmI1MTE4YjkyOWNjNjMwjf/aOg==: 00:12:28.912 12:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:28.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:28.912 12:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:12:28.912 12:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.912 12:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.912 12:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.912 12:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:28.912 12:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:28.912 12:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:28.912 12:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:12:28.912 12:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:28.912 12:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:28.912 12:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:28.912 12:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:28.912 12:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:28.912 12:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:28.912 12:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.912 12:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.912 12:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.912 12:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:28.912 12:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:29.477 00:12:29.477 12:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:29.477 12:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:29.477 12:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:29.735 12:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:29.735 12:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:29.735 12:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.735 12:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.735 12:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:29.735 12:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:29.735 { 00:12:29.735 "cntlid": 109, 00:12:29.735 "qid": 0, 00:12:29.735 "state": "enabled", 00:12:29.735 "thread": "nvmf_tgt_poll_group_000", 00:12:29.735 "listen_address": { 00:12:29.735 "trtype": "TCP", 00:12:29.735 "adrfam": "IPv4", 00:12:29.735 "traddr": "10.0.0.2", 00:12:29.735 "trsvcid": "4420" 00:12:29.735 }, 00:12:29.735 "peer_address": { 00:12:29.735 "trtype": "TCP", 00:12:29.735 "adrfam": "IPv4", 00:12:29.735 "traddr": "10.0.0.1", 00:12:29.735 "trsvcid": "38112" 00:12:29.735 }, 00:12:29.735 "auth": { 00:12:29.735 "state": "completed", 00:12:29.735 "digest": "sha512", 00:12:29.735 "dhgroup": "ffdhe2048" 00:12:29.735 } 00:12:29.735 } 00:12:29.735 ]' 00:12:29.735 12:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:29.735 12:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:29.735 12:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:29.735 12:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:29.735 12:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:29.735 12:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:29.735 12:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:29.735 12:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:29.993 12:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:02:YTgwOTU2ZGMyMjY4NTBhYzBhZGM1MTI1OTJhNDBiOTc1MzViNDdkYjhkNmY0M2EwVb/DEw==: --dhchap-ctrl-secret DHHC-1:01:ZWFmOGJiOTllMDUwODc2MWU4M2Q4NTg4OTUyZDE5MjBPakZ8: 00:12:30.558 12:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:30.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:30.558 12:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:12:30.558 12:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.558 12:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.558 12:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.558 12:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:30.558 12:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:30.558 12:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:30.816 12:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:12:30.816 12:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:30.816 12:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:30.816 12:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:12:30.816 12:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:30.816 12:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:30.816 12:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key3 00:12:30.816 12:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.816 12:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.816 12:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.816 12:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:30.816 12:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:31.413 00:12:31.413 12:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:31.413 12:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:31.413 12:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:31.413 12:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:31.413 12:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:31.413 12:36:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.414 12:36:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.670 12:36:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.670 12:36:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # qpairs='[ 00:12:31.670 { 00:12:31.670 "cntlid": 111, 00:12:31.670 "qid": 0, 00:12:31.670 "state": "enabled", 00:12:31.670 "thread": "nvmf_tgt_poll_group_000", 00:12:31.670 "listen_address": { 00:12:31.670 "trtype": "TCP", 00:12:31.670 "adrfam": "IPv4", 00:12:31.670 "traddr": "10.0.0.2", 00:12:31.670 "trsvcid": "4420" 00:12:31.670 }, 00:12:31.670 "peer_address": { 00:12:31.670 "trtype": "TCP", 00:12:31.671 "adrfam": "IPv4", 00:12:31.671 "traddr": "10.0.0.1", 00:12:31.671 "trsvcid": "38132" 00:12:31.671 }, 00:12:31.671 "auth": { 00:12:31.671 "state": "completed", 00:12:31.671 "digest": "sha512", 00:12:31.671 "dhgroup": "ffdhe2048" 00:12:31.671 } 00:12:31.671 } 00:12:31.671 ]' 00:12:31.671 12:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:31.671 12:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:31.671 12:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:31.671 12:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:31.671 12:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:31.671 12:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.671 12:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.671 12:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.929 12:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:03:YzM3ZjlmZGExOWIzOTIzYTM4MjY2YzJlMjc4YTJmNTI5YzM4YzRmMjA2N2Q1YmE4NTg4ZmQxOTVlZmUwZjRhNUvHC7o=: 00:12:32.863 12:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:32.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:32.863 12:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:12:32.863 12:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.863 12:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.863 12:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.863 12:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:32.863 12:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:32.863 12:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:32.863 12:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:33.121 12:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:12:33.121 12:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:33.121 12:36:58 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha512 00:12:33.121 12:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:33.121 12:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:33.121 12:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:33.121 12:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:33.121 12:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.121 12:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.121 12:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.121 12:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:33.121 12:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:33.379 00:12:33.379 12:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:33.379 12:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:33.379 12:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.637 12:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.637 12:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:33.637 12:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.637 12:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.637 12:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.637 12:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:33.637 { 00:12:33.637 "cntlid": 113, 00:12:33.637 "qid": 0, 00:12:33.637 "state": "enabled", 00:12:33.637 "thread": "nvmf_tgt_poll_group_000", 00:12:33.637 "listen_address": { 00:12:33.637 "trtype": "TCP", 00:12:33.637 "adrfam": "IPv4", 00:12:33.637 "traddr": "10.0.0.2", 00:12:33.637 "trsvcid": "4420" 00:12:33.637 }, 00:12:33.637 "peer_address": { 00:12:33.637 "trtype": "TCP", 00:12:33.637 "adrfam": "IPv4", 00:12:33.637 "traddr": "10.0.0.1", 00:12:33.637 "trsvcid": "38170" 00:12:33.637 }, 00:12:33.637 "auth": { 00:12:33.637 "state": "completed", 00:12:33.637 "digest": "sha512", 00:12:33.637 "dhgroup": "ffdhe3072" 00:12:33.637 } 00:12:33.637 } 00:12:33.637 ]' 00:12:33.637 12:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:33.637 12:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:33.894 12:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:33.894 12:36:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:33.894 12:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:33.894 12:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:33.894 12:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:33.894 12:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:34.152 12:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:00:NDFjNTIxNjcxZGJlYzJkYTY3ODRiOWE5MDlmOTAxYTFhMGYyODdhMmE2MGFkZDQ58CXGvw==: --dhchap-ctrl-secret DHHC-1:03:NDk4NDYxNTNjY2RlNjE5ZDQ2Y2VlZWRjZjYzMDhiYzU4YTA3ZmJjYjk2MjJkMTcxYjU5Yjk5ZDcyMjU5MzhmOaZfXz8=: 00:12:34.719 12:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:34.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:34.719 12:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:12:34.719 12:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.719 12:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.719 12:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.719 12:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:34.719 12:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:34.719 12:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:34.977 12:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:12:34.977 12:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:34.977 12:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:34.977 12:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:34.977 12:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:34.977 12:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:34.977 12:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:34.977 12:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:34.977 12:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.977 12:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:34.977 12:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:34.977 12:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:35.542 00:12:35.542 12:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:35.542 12:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:35.542 12:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:35.799 12:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:35.799 12:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:35.799 12:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.799 12:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.799 12:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.799 12:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:35.799 { 00:12:35.799 "cntlid": 115, 00:12:35.799 "qid": 0, 00:12:35.799 "state": "enabled", 00:12:35.799 "thread": "nvmf_tgt_poll_group_000", 00:12:35.799 "listen_address": { 00:12:35.799 "trtype": "TCP", 00:12:35.799 "adrfam": "IPv4", 00:12:35.799 "traddr": "10.0.0.2", 00:12:35.799 "trsvcid": "4420" 00:12:35.799 }, 00:12:35.799 "peer_address": { 00:12:35.799 "trtype": "TCP", 00:12:35.799 "adrfam": "IPv4", 00:12:35.799 "traddr": "10.0.0.1", 00:12:35.799 "trsvcid": "57158" 00:12:35.799 }, 00:12:35.799 "auth": { 00:12:35.799 "state": "completed", 00:12:35.799 "digest": "sha512", 00:12:35.799 "dhgroup": "ffdhe3072" 00:12:35.799 } 00:12:35.799 } 00:12:35.799 ]' 00:12:35.799 12:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:35.799 12:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:35.799 12:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:35.799 12:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:35.799 12:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:35.799 12:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:35.799 12:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:35.799 12:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.057 12:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:01:ZDFjMDIxNzhjMjcwOTk2YmI0NzBmN2Y0NTliYWI4YjkvSJaJ: --dhchap-ctrl-secret DHHC-1:02:YWM4YTZmMWU1NTFkMmQ0N2NjYTRjMjI0ODc5N2Q4MDYwYmI1MTE4YjkyOWNjNjMwjf/aOg==: 00:12:36.991 12:37:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:36.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:36.991 12:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:12:36.991 12:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.991 12:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.991 12:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.991 12:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:36.991 12:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:36.991 12:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:36.991 12:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:12:36.991 12:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:36.991 12:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:36.991 12:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:36.991 12:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:36.991 12:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:36.991 12:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:36.991 12:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.991 12:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.991 12:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.991 12:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:36.991 12:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:37.249 00:12:37.249 12:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:37.249 12:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:37.249 12:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:37.814 12:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:37.815 12:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
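The hostrpc calls above are the matching host-side configuration: auth.sh drives a second SPDK application over /var/tmp/host.sock, pins the digest and DH group under test, then attaches with the same key names the target was told to expect. Reproduced as plain commands, taken directly from the ffdhe3072/key2 iteration of this trace:

    # Host-side SPDK app, reached through the socket used by hostrpc in this log.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
    # Attach to the target; key2/ckey2 must match what nvmf_subsystem_add_host allowed.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 \
        -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2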
nqn.2024-03.io.spdk:cnode0 00:12:37.815 12:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:37.815 12:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.815 12:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:37.815 12:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:37.815 { 00:12:37.815 "cntlid": 117, 00:12:37.815 "qid": 0, 00:12:37.815 "state": "enabled", 00:12:37.815 "thread": "nvmf_tgt_poll_group_000", 00:12:37.815 "listen_address": { 00:12:37.815 "trtype": "TCP", 00:12:37.815 "adrfam": "IPv4", 00:12:37.815 "traddr": "10.0.0.2", 00:12:37.815 "trsvcid": "4420" 00:12:37.815 }, 00:12:37.815 "peer_address": { 00:12:37.815 "trtype": "TCP", 00:12:37.815 "adrfam": "IPv4", 00:12:37.815 "traddr": "10.0.0.1", 00:12:37.815 "trsvcid": "57184" 00:12:37.815 }, 00:12:37.815 "auth": { 00:12:37.815 "state": "completed", 00:12:37.815 "digest": "sha512", 00:12:37.815 "dhgroup": "ffdhe3072" 00:12:37.815 } 00:12:37.815 } 00:12:37.815 ]' 00:12:37.815 12:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:37.815 12:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:37.815 12:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:37.815 12:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:37.815 12:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:37.815 12:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:37.815 12:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:37.815 12:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.074 12:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:02:YTgwOTU2ZGMyMjY4NTBhYzBhZGM1MTI1OTJhNDBiOTc1MzViNDdkYjhkNmY0M2EwVb/DEw==: --dhchap-ctrl-secret DHHC-1:01:ZWFmOGJiOTllMDUwODc2MWU4M2Q4NTg4OTUyZDE5MjBPakZ8: 00:12:38.701 12:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:38.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:38.701 12:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:12:38.701 12:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.701 12:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.959 12:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.959 12:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:38.959 12:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:38.959 12:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:12:39.217 12:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:12:39.217 12:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:39.217 12:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:39.217 12:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:12:39.217 12:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:39.217 12:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:39.217 12:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key3 00:12:39.217 12:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.217 12:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.217 12:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.217 12:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:39.217 12:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:39.475 00:12:39.475 12:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:39.475 12:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:39.475 12:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:39.734 12:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:39.734 12:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:39.734 12:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.734 12:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.734 12:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.734 12:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:39.734 { 00:12:39.734 "cntlid": 119, 00:12:39.734 "qid": 0, 00:12:39.734 "state": "enabled", 00:12:39.734 "thread": "nvmf_tgt_poll_group_000", 00:12:39.734 "listen_address": { 00:12:39.734 "trtype": "TCP", 00:12:39.734 "adrfam": "IPv4", 00:12:39.734 "traddr": "10.0.0.2", 00:12:39.734 "trsvcid": "4420" 00:12:39.734 }, 00:12:39.734 "peer_address": { 00:12:39.734 "trtype": "TCP", 00:12:39.734 "adrfam": "IPv4", 00:12:39.734 "traddr": "10.0.0.1", 00:12:39.734 "trsvcid": "57216" 00:12:39.734 }, 00:12:39.734 "auth": { 00:12:39.734 "state": "completed", 00:12:39.734 "digest": "sha512", 00:12:39.734 "dhgroup": "ffdhe3072" 00:12:39.734 } 00:12:39.734 } 00:12:39.734 ]' 00:12:39.734 12:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:39.734 
12:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:39.734 12:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:39.734 12:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:39.734 12:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:39.992 12:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:39.992 12:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:39.992 12:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.250 12:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:03:YzM3ZjlmZGExOWIzOTIzYTM4MjY2YzJlMjc4YTJmNTI5YzM4YzRmMjA2N2Q1YmE4NTg4ZmQxOTVlZmUwZjRhNUvHC7o=: 00:12:40.816 12:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:40.816 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:40.816 12:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:12:40.816 12:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.816 12:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.816 12:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:40.816 12:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:40.816 12:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:40.816 12:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:40.816 12:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:41.074 12:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:12:41.074 12:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:41.074 12:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:41.074 12:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:41.074 12:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:41.074 12:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:41.074 12:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:41.074 12:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.074 12:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.074 12:37:06 nvmf_tcp.nvmf_auth_target -- 
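Each pass then verifies the negotiated parameters rather than just the connection: the auth.sh@44 through @48 steps seen above pull the controller name from the host side and the qpair list from the target, and assert on the auth fields with jq. A condensed sketch of those checks, shown for the ffdhe3072 pass; the trace's rpc_cmd wrapper hides the target's socket path, so the default socket is assumed here:

    # Host side: exactly one controller named nvme0 should exist.
    [[ $(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    # Target side: the accepted qpair must report the digest/dhgroup under test
    # and an authentication state of "completed".
    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha512"    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe3072" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]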
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.074 12:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:41.074 12:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:41.333 00:12:41.333 12:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:41.333 12:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:41.333 12:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:41.591 12:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:41.591 12:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:41.591 12:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.591 12:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.591 12:37:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.591 12:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:41.591 { 00:12:41.591 "cntlid": 121, 00:12:41.591 "qid": 0, 00:12:41.591 "state": "enabled", 00:12:41.591 "thread": "nvmf_tgt_poll_group_000", 00:12:41.591 "listen_address": { 00:12:41.591 "trtype": "TCP", 00:12:41.591 "adrfam": "IPv4", 00:12:41.591 "traddr": "10.0.0.2", 00:12:41.591 "trsvcid": "4420" 00:12:41.591 }, 00:12:41.591 "peer_address": { 00:12:41.591 "trtype": "TCP", 00:12:41.591 "adrfam": "IPv4", 00:12:41.591 "traddr": "10.0.0.1", 00:12:41.591 "trsvcid": "57252" 00:12:41.591 }, 00:12:41.591 "auth": { 00:12:41.591 "state": "completed", 00:12:41.591 "digest": "sha512", 00:12:41.591 "dhgroup": "ffdhe4096" 00:12:41.591 } 00:12:41.591 } 00:12:41.591 ]' 00:12:41.591 12:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:41.849 12:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:41.849 12:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:41.849 12:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:41.849 12:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:41.849 12:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:41.849 12:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:41.849 12:37:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:42.107 12:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret 
DHHC-1:00:NDFjNTIxNjcxZGJlYzJkYTY3ODRiOWE5MDlmOTAxYTFhMGYyODdhMmE2MGFkZDQ58CXGvw==: --dhchap-ctrl-secret DHHC-1:03:NDk4NDYxNTNjY2RlNjE5ZDQ2Y2VlZWRjZjYzMDhiYzU4YTA3ZmJjYjk2MjJkMTcxYjU5Yjk5ZDcyMjU5MzhmOaZfXz8=: 00:12:42.690 12:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:42.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:42.690 12:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:12:42.690 12:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.690 12:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.690 12:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.690 12:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:42.690 12:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:42.690 12:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:42.948 12:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:12:42.948 12:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:42.948 12:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:42.948 12:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:42.948 12:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:42.948 12:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:42.948 12:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.948 12:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.948 12:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.948 12:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.948 12:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.948 12:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:43.513 00:12:43.513 12:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:43.513 12:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:43.513 12:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
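After detaching the SPDK host-side controller, every pass also performs a kernel-initiator round trip with nvme-cli, handing it the same credentials in the textual DHHC-1 secret form that both sides of this trace accept (the full values are the DHHC-1:NN:...: strings logged above; they are elided in this sketch):

    # Connect with the kernel initiator, authenticating with DH-HMAC-CHAP.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 \
        --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 \
        --dhchap-secret 'DHHC-1:00:...' \
        --dhchap-ctrl-secret 'DHHC-1:03:...'
    # Tear the connection back down before the next iteration.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0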
00:12:43.771 12:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:43.771 12:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:43.771 12:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.771 12:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.771 12:37:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.771 12:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:43.771 { 00:12:43.771 "cntlid": 123, 00:12:43.771 "qid": 0, 00:12:43.771 "state": "enabled", 00:12:43.771 "thread": "nvmf_tgt_poll_group_000", 00:12:43.771 "listen_address": { 00:12:43.771 "trtype": "TCP", 00:12:43.771 "adrfam": "IPv4", 00:12:43.771 "traddr": "10.0.0.2", 00:12:43.771 "trsvcid": "4420" 00:12:43.771 }, 00:12:43.771 "peer_address": { 00:12:43.771 "trtype": "TCP", 00:12:43.771 "adrfam": "IPv4", 00:12:43.771 "traddr": "10.0.0.1", 00:12:43.771 "trsvcid": "46546" 00:12:43.771 }, 00:12:43.771 "auth": { 00:12:43.771 "state": "completed", 00:12:43.771 "digest": "sha512", 00:12:43.771 "dhgroup": "ffdhe4096" 00:12:43.771 } 00:12:43.771 } 00:12:43.771 ]' 00:12:43.771 12:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:43.771 12:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:43.771 12:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:44.029 12:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:44.029 12:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:44.029 12:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:44.029 12:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:44.029 12:37:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:44.288 12:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:01:ZDFjMDIxNzhjMjcwOTk2YmI0NzBmN2Y0NTliYWI4YjkvSJaJ: --dhchap-ctrl-secret DHHC-1:02:YWM4YTZmMWU1NTFkMmQ0N2NjYTRjMjI0ODc5N2Q4MDYwYmI1MTE4YjkyOWNjNjMwjf/aOg==: 00:12:44.855 12:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:44.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:44.855 12:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:12:44.855 12:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.855 12:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.855 12:37:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.855 12:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:44.855 12:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe4096 00:12:44.855 12:37:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:45.111 12:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:12:45.111 12:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:45.111 12:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:45.111 12:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:45.111 12:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:45.111 12:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:45.111 12:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:45.111 12:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.111 12:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.111 12:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.111 12:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:45.111 12:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:45.731 00:12:45.731 12:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:45.731 12:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:45.731 12:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:45.731 12:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:45.731 12:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:45.731 12:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:45.731 12:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.731 12:37:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:45.731 12:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:45.731 { 00:12:45.731 "cntlid": 125, 00:12:45.731 "qid": 0, 00:12:45.731 "state": "enabled", 00:12:45.731 "thread": "nvmf_tgt_poll_group_000", 00:12:45.731 "listen_address": { 00:12:45.731 "trtype": "TCP", 00:12:45.731 "adrfam": "IPv4", 00:12:45.731 "traddr": "10.0.0.2", 00:12:45.731 "trsvcid": "4420" 00:12:45.731 }, 00:12:45.731 "peer_address": { 00:12:45.731 "trtype": "TCP", 00:12:45.731 "adrfam": "IPv4", 00:12:45.731 "traddr": "10.0.0.1", 00:12:45.731 "trsvcid": "46562" 00:12:45.731 }, 00:12:45.731 
"auth": { 00:12:45.731 "state": "completed", 00:12:45.731 "digest": "sha512", 00:12:45.731 "dhgroup": "ffdhe4096" 00:12:45.731 } 00:12:45.731 } 00:12:45.731 ]' 00:12:45.731 12:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:45.989 12:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:45.989 12:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:45.989 12:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:45.989 12:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:45.989 12:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:45.989 12:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:45.989 12:37:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:46.246 12:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:02:YTgwOTU2ZGMyMjY4NTBhYzBhZGM1MTI1OTJhNDBiOTc1MzViNDdkYjhkNmY0M2EwVb/DEw==: --dhchap-ctrl-secret DHHC-1:01:ZWFmOGJiOTllMDUwODc2MWU4M2Q4NTg4OTUyZDE5MjBPakZ8: 00:12:46.811 12:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:46.811 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:46.811 12:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:12:46.811 12:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.811 12:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.811 12:37:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.811 12:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:46.811 12:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:46.811 12:37:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:12:47.383 12:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:12:47.383 12:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:47.383 12:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:47.384 12:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:12:47.384 12:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:47.384 12:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:47.384 12:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key3 00:12:47.384 12:37:13 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.384 12:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.384 12:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.384 12:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:47.384 12:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:47.641 00:12:47.641 12:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:47.641 12:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:47.641 12:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:47.899 12:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:47.899 12:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:47.899 12:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.899 12:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.899 12:37:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.899 12:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:47.899 { 00:12:47.899 "cntlid": 127, 00:12:47.899 "qid": 0, 00:12:47.899 "state": "enabled", 00:12:47.899 "thread": "nvmf_tgt_poll_group_000", 00:12:47.899 "listen_address": { 00:12:47.899 "trtype": "TCP", 00:12:47.899 "adrfam": "IPv4", 00:12:47.899 "traddr": "10.0.0.2", 00:12:47.899 "trsvcid": "4420" 00:12:47.899 }, 00:12:47.899 "peer_address": { 00:12:47.899 "trtype": "TCP", 00:12:47.899 "adrfam": "IPv4", 00:12:47.899 "traddr": "10.0.0.1", 00:12:47.899 "trsvcid": "46578" 00:12:47.899 }, 00:12:47.899 "auth": { 00:12:47.899 "state": "completed", 00:12:47.899 "digest": "sha512", 00:12:47.899 "dhgroup": "ffdhe4096" 00:12:47.899 } 00:12:47.899 } 00:12:47.899 ]' 00:12:47.899 12:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:47.899 12:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:47.899 12:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:47.899 12:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:47.899 12:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:47.899 12:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:47.899 12:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:47.899 12:37:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:48.464 12:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:03:YzM3ZjlmZGExOWIzOTIzYTM4MjY2YzJlMjc4YTJmNTI5YzM4YzRmMjA2N2Q1YmE4NTg4ZmQxOTVlZmUwZjRhNUvHC7o=: 00:12:49.028 12:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:49.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:49.028 12:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:12:49.028 12:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.028 12:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.028 12:37:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.028 12:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:49.028 12:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:49.028 12:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:49.028 12:37:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:49.286 12:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:12:49.286 12:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:49.286 12:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:49.286 12:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:49.286 12:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:49.286 12:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:49.286 12:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:49.286 12:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.286 12:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.286 12:37:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.286 12:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:49.286 12:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:49.849 00:12:49.849 12:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:49.849 12:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
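The auth.sh@92 and @93 markers above are the outer loops driving this whole section: for every DH group the script re-runs connect_authenticate once per key index. A reconstructed outline of that sweep, with illustrative placeholders for the key arrays (the real secrets are generated earlier in the run and are not part of this excerpt); note that the keyid 3 passes in this trace supply only --dhchap-key, i.e. they exercise unidirectional authentication:

    # Outline only: hostrpc, connect_authenticate and the real key material live in target/auth.sh.
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)   # groups exercised in this excerpt
    keys=(k0 k1 k2 k3)                                   # placeholder names
    ckeys=(c0 c1 c2 "")                                  # ckeys[3] empty -> no controller key for keyid 3
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha512 "$dhgroup" "$keyid"
        done
    done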
00:12:49.849 12:37:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:50.107 12:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:50.107 12:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:50.107 12:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.107 12:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.107 12:37:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.107 12:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:50.107 { 00:12:50.107 "cntlid": 129, 00:12:50.107 "qid": 0, 00:12:50.107 "state": "enabled", 00:12:50.107 "thread": "nvmf_tgt_poll_group_000", 00:12:50.107 "listen_address": { 00:12:50.107 "trtype": "TCP", 00:12:50.107 "adrfam": "IPv4", 00:12:50.107 "traddr": "10.0.0.2", 00:12:50.107 "trsvcid": "4420" 00:12:50.107 }, 00:12:50.107 "peer_address": { 00:12:50.107 "trtype": "TCP", 00:12:50.107 "adrfam": "IPv4", 00:12:50.107 "traddr": "10.0.0.1", 00:12:50.107 "trsvcid": "46596" 00:12:50.107 }, 00:12:50.107 "auth": { 00:12:50.107 "state": "completed", 00:12:50.107 "digest": "sha512", 00:12:50.107 "dhgroup": "ffdhe6144" 00:12:50.107 } 00:12:50.107 } 00:12:50.107 ]' 00:12:50.107 12:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:50.107 12:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:50.107 12:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:50.107 12:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:50.107 12:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:50.364 12:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:50.364 12:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:50.364 12:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:50.621 12:37:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:00:NDFjNTIxNjcxZGJlYzJkYTY3ODRiOWE5MDlmOTAxYTFhMGYyODdhMmE2MGFkZDQ58CXGvw==: --dhchap-ctrl-secret DHHC-1:03:NDk4NDYxNTNjY2RlNjE5ZDQ2Y2VlZWRjZjYzMDhiYzU4YTA3ZmJjYjk2MjJkMTcxYjU5Yjk5ZDcyMjU5MzhmOaZfXz8=: 00:12:51.187 12:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:51.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:51.187 12:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:12:51.187 12:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.187 12:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.187 12:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.187 
12:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:51.187 12:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:51.187 12:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:51.445 12:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:12:51.445 12:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:51.445 12:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:51.445 12:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:51.445 12:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:12:51.445 12:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:51.445 12:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:51.445 12:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:51.445 12:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.445 12:37:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:51.445 12:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:51.445 12:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:52.009 00:12:52.010 12:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:52.010 12:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:52.010 12:37:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:52.266 12:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:52.266 12:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:52.266 12:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.266 12:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.266 12:37:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.266 12:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:52.266 { 00:12:52.266 "cntlid": 131, 00:12:52.266 "qid": 0, 00:12:52.266 "state": "enabled", 00:12:52.266 "thread": "nvmf_tgt_poll_group_000", 00:12:52.266 "listen_address": { 00:12:52.266 "trtype": "TCP", 00:12:52.266 "adrfam": "IPv4", 00:12:52.266 "traddr": "10.0.0.2", 00:12:52.266 "trsvcid": 
"4420" 00:12:52.266 }, 00:12:52.266 "peer_address": { 00:12:52.266 "trtype": "TCP", 00:12:52.266 "adrfam": "IPv4", 00:12:52.266 "traddr": "10.0.0.1", 00:12:52.266 "trsvcid": "46638" 00:12:52.266 }, 00:12:52.266 "auth": { 00:12:52.266 "state": "completed", 00:12:52.266 "digest": "sha512", 00:12:52.266 "dhgroup": "ffdhe6144" 00:12:52.266 } 00:12:52.266 } 00:12:52.266 ]' 00:12:52.266 12:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:52.266 12:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:52.266 12:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:52.266 12:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:52.266 12:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:52.266 12:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:52.266 12:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:52.266 12:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:52.523 12:37:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:01:ZDFjMDIxNzhjMjcwOTk2YmI0NzBmN2Y0NTliYWI4YjkvSJaJ: --dhchap-ctrl-secret DHHC-1:02:YWM4YTZmMWU1NTFkMmQ0N2NjYTRjMjI0ODc5N2Q4MDYwYmI1MTE4YjkyOWNjNjMwjf/aOg==: 00:12:53.479 12:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:53.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:53.479 12:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:12:53.479 12:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.479 12:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.479 12:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.479 12:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:53.479 12:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:53.479 12:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:53.479 12:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:12:53.479 12:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:53.479 12:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:53.479 12:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:53.479 12:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:12:53.479 12:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:53.479 12:37:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:53.479 12:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:53.479 12:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.479 12:37:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:53.479 12:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:53.479 12:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:54.044 00:12:54.044 12:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:54.044 12:37:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:54.044 12:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:54.301 12:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:54.301 12:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:54.301 12:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:54.301 12:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.301 12:37:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:54.301 12:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:54.301 { 00:12:54.301 "cntlid": 133, 00:12:54.301 "qid": 0, 00:12:54.301 "state": "enabled", 00:12:54.301 "thread": "nvmf_tgt_poll_group_000", 00:12:54.301 "listen_address": { 00:12:54.301 "trtype": "TCP", 00:12:54.301 "adrfam": "IPv4", 00:12:54.301 "traddr": "10.0.0.2", 00:12:54.301 "trsvcid": "4420" 00:12:54.301 }, 00:12:54.301 "peer_address": { 00:12:54.301 "trtype": "TCP", 00:12:54.301 "adrfam": "IPv4", 00:12:54.301 "traddr": "10.0.0.1", 00:12:54.301 "trsvcid": "59182" 00:12:54.301 }, 00:12:54.301 "auth": { 00:12:54.301 "state": "completed", 00:12:54.301 "digest": "sha512", 00:12:54.301 "dhgroup": "ffdhe6144" 00:12:54.301 } 00:12:54.301 } 00:12:54.301 ]' 00:12:54.301 12:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:54.301 12:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:54.301 12:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:54.559 12:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:54.559 12:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:54.559 12:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:54.559 12:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:12:54.559 12:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:54.817 12:37:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:02:YTgwOTU2ZGMyMjY4NTBhYzBhZGM1MTI1OTJhNDBiOTc1MzViNDdkYjhkNmY0M2EwVb/DEw==: --dhchap-ctrl-secret DHHC-1:01:ZWFmOGJiOTllMDUwODc2MWU4M2Q4NTg4OTUyZDE5MjBPakZ8: 00:12:55.382 12:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:55.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:55.382 12:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:12:55.382 12:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.382 12:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.382 12:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.382 12:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:55.382 12:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:55.382 12:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:12:55.640 12:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:12:55.640 12:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:55.640 12:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:55.640 12:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:12:55.640 12:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:12:55.640 12:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:55.640 12:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key3 00:12:55.640 12:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:55.640 12:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.640 12:37:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:55.640 12:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:55.640 12:37:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:12:56.205 00:12:56.205 12:37:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:56.205 12:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:56.205 12:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:56.463 12:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:56.463 12:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:56.463 12:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:56.463 12:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.463 12:37:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:56.463 12:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:56.463 { 00:12:56.463 "cntlid": 135, 00:12:56.463 "qid": 0, 00:12:56.463 "state": "enabled", 00:12:56.463 "thread": "nvmf_tgt_poll_group_000", 00:12:56.463 "listen_address": { 00:12:56.463 "trtype": "TCP", 00:12:56.463 "adrfam": "IPv4", 00:12:56.463 "traddr": "10.0.0.2", 00:12:56.463 "trsvcid": "4420" 00:12:56.463 }, 00:12:56.463 "peer_address": { 00:12:56.463 "trtype": "TCP", 00:12:56.463 "adrfam": "IPv4", 00:12:56.463 "traddr": "10.0.0.1", 00:12:56.463 "trsvcid": "59202" 00:12:56.463 }, 00:12:56.463 "auth": { 00:12:56.463 "state": "completed", 00:12:56.463 "digest": "sha512", 00:12:56.463 "dhgroup": "ffdhe6144" 00:12:56.463 } 00:12:56.463 } 00:12:56.463 ]' 00:12:56.463 12:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:56.463 12:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:56.463 12:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:56.463 12:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:56.463 12:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:56.720 12:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:56.721 12:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:56.721 12:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:57.070 12:37:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:03:YzM3ZjlmZGExOWIzOTIzYTM4MjY2YzJlMjc4YTJmNTI5YzM4YzRmMjA2N2Q1YmE4NTg4ZmQxOTVlZmUwZjRhNUvHC7o=: 00:12:57.637 12:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:57.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:57.637 12:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:12:57.637 12:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.637 12:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.637 12:37:23 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.637 12:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:12:57.637 12:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:57.637 12:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:57.637 12:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:57.895 12:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:12:57.895 12:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:12:57.895 12:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:12:57.895 12:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:12:57.895 12:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:12:57.895 12:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:57.895 12:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:57.895 12:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.895 12:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.895 12:37:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.895 12:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:57.895 12:37:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:58.459 00:12:58.459 12:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:12:58.459 12:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:12:58.459 12:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:58.716 12:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:58.716 12:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:58.716 12:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.716 12:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.716 12:37:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.716 12:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:12:58.716 { 00:12:58.716 "cntlid": 137, 00:12:58.716 "qid": 0, 00:12:58.716 "state": "enabled", 
00:12:58.716 "thread": "nvmf_tgt_poll_group_000", 00:12:58.716 "listen_address": { 00:12:58.716 "trtype": "TCP", 00:12:58.716 "adrfam": "IPv4", 00:12:58.716 "traddr": "10.0.0.2", 00:12:58.716 "trsvcid": "4420" 00:12:58.716 }, 00:12:58.716 "peer_address": { 00:12:58.716 "trtype": "TCP", 00:12:58.716 "adrfam": "IPv4", 00:12:58.716 "traddr": "10.0.0.1", 00:12:58.716 "trsvcid": "59224" 00:12:58.716 }, 00:12:58.716 "auth": { 00:12:58.716 "state": "completed", 00:12:58.716 "digest": "sha512", 00:12:58.716 "dhgroup": "ffdhe8192" 00:12:58.716 } 00:12:58.716 } 00:12:58.716 ]' 00:12:58.716 12:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:12:58.716 12:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:58.716 12:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:12:58.716 12:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:58.716 12:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:12:58.973 12:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:58.973 12:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:58.973 12:37:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.231 12:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:00:NDFjNTIxNjcxZGJlYzJkYTY3ODRiOWE5MDlmOTAxYTFhMGYyODdhMmE2MGFkZDQ58CXGvw==: --dhchap-ctrl-secret DHHC-1:03:NDk4NDYxNTNjY2RlNjE5ZDQ2Y2VlZWRjZjYzMDhiYzU4YTA3ZmJjYjk2MjJkMTcxYjU5Yjk5ZDcyMjU5MzhmOaZfXz8=: 00:12:59.796 12:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:59.796 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:59.796 12:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:12:59.796 12:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:59.796 12:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.796 12:37:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:59.796 12:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:12:59.796 12:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:59.796 12:37:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:00.053 12:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:13:00.053 12:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:00.053 12:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:00.053 12:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:00.053 
12:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:00.053 12:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:00.053 12:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:00.053 12:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.053 12:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.053 12:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.053 12:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:00.053 12:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:00.640 00:13:00.640 12:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:00.640 12:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:00.640 12:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:00.900 12:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:00.900 12:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:00.900 12:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:00.900 12:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.900 12:37:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:00.900 12:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:00.900 { 00:13:00.900 "cntlid": 139, 00:13:00.900 "qid": 0, 00:13:00.900 "state": "enabled", 00:13:00.900 "thread": "nvmf_tgt_poll_group_000", 00:13:00.900 "listen_address": { 00:13:00.900 "trtype": "TCP", 00:13:00.900 "adrfam": "IPv4", 00:13:00.900 "traddr": "10.0.0.2", 00:13:00.900 "trsvcid": "4420" 00:13:00.900 }, 00:13:00.900 "peer_address": { 00:13:00.900 "trtype": "TCP", 00:13:00.900 "adrfam": "IPv4", 00:13:00.900 "traddr": "10.0.0.1", 00:13:00.900 "trsvcid": "59246" 00:13:00.900 }, 00:13:00.900 "auth": { 00:13:00.900 "state": "completed", 00:13:00.900 "digest": "sha512", 00:13:00.900 "dhgroup": "ffdhe8192" 00:13:00.900 } 00:13:00.900 } 00:13:00.900 ]' 00:13:00.900 12:37:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:01.158 12:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:01.158 12:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:01.158 12:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:01.158 12:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:13:01.158 12:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:01.158 12:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:01.158 12:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:01.415 12:37:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:01:ZDFjMDIxNzhjMjcwOTk2YmI0NzBmN2Y0NTliYWI4YjkvSJaJ: --dhchap-ctrl-secret DHHC-1:02:YWM4YTZmMWU1NTFkMmQ0N2NjYTRjMjI0ODc5N2Q4MDYwYmI1MTE4YjkyOWNjNjMwjf/aOg==: 00:13:02.350 12:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:02.350 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:02.350 12:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:13:02.350 12:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.350 12:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.350 12:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.350 12:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:02.350 12:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:02.350 12:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:02.608 12:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:13:02.608 12:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:02.608 12:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:02.608 12:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:02.608 12:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:02.608 12:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:02.608 12:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:02.608 12:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.608 12:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.608 12:37:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.608 12:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:02.608 12:37:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:03.173 00:13:03.173 12:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:03.173 12:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:03.173 12:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:03.431 12:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:03.431 12:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:03.431 12:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.431 12:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.431 12:37:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.431 12:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:03.431 { 00:13:03.431 "cntlid": 141, 00:13:03.431 "qid": 0, 00:13:03.431 "state": "enabled", 00:13:03.431 "thread": "nvmf_tgt_poll_group_000", 00:13:03.431 "listen_address": { 00:13:03.431 "trtype": "TCP", 00:13:03.431 "adrfam": "IPv4", 00:13:03.431 "traddr": "10.0.0.2", 00:13:03.431 "trsvcid": "4420" 00:13:03.431 }, 00:13:03.431 "peer_address": { 00:13:03.431 "trtype": "TCP", 00:13:03.431 "adrfam": "IPv4", 00:13:03.431 "traddr": "10.0.0.1", 00:13:03.431 "trsvcid": "59286" 00:13:03.431 }, 00:13:03.431 "auth": { 00:13:03.431 "state": "completed", 00:13:03.431 "digest": "sha512", 00:13:03.431 "dhgroup": "ffdhe8192" 00:13:03.431 } 00:13:03.431 } 00:13:03.431 ]' 00:13:03.431 12:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:03.690 12:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:03.690 12:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:03.690 12:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:03.690 12:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:03.690 12:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:03.690 12:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:03.690 12:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:03.948 12:37:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:02:YTgwOTU2ZGMyMjY4NTBhYzBhZGM1MTI1OTJhNDBiOTc1MzViNDdkYjhkNmY0M2EwVb/DEw==: --dhchap-ctrl-secret DHHC-1:01:ZWFmOGJiOTllMDUwODc2MWU4M2Q4NTg4OTUyZDE5MjBPakZ8: 00:13:04.882 12:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:04.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:04.882 12:37:30 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:13:04.882 12:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.882 12:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.882 12:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.882 12:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:04.882 12:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:04.882 12:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:04.882 12:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:13:04.882 12:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:04.882 12:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:04.882 12:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:04.882 12:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:04.882 12:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:04.882 12:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key3 00:13:04.882 12:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.882 12:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.140 12:37:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:05.140 12:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:05.140 12:37:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:05.707 00:13:05.707 12:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:05.707 12:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:05.707 12:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:05.976 12:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:05.976 12:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:05.976 12:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:05.976 12:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.976 12:37:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
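Besides the SPDK initiator path, each iteration also authenticates with the kernel host stack via nvme-cli, passing the same secrets in DHHC-1 format on the command line, and then disconnects. A hedged sketch of that step, using the host NQN and hostid from this run (the secret is shown as a placeholder; the full key3 value appears verbatim in the trace, and on a real system it would normally come from a protected file rather than the command line):

SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5
HOSTID=16360ad5-8c23-4d49-afe0-9a35c426fec5
KEY='DHHC-1:03:...'   # host secret for key3; key3 has no separate controller secret in this run

# kernel initiator: authenticate against the subsystem, then drop the connection
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid "$HOSTID" --dhchap-secret "$KEY"
nvme disconnect -n "$SUBNQN"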
00:13:05.976 12:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:05.976 { 00:13:05.976 "cntlid": 143, 00:13:05.976 "qid": 0, 00:13:05.976 "state": "enabled", 00:13:05.976 "thread": "nvmf_tgt_poll_group_000", 00:13:05.976 "listen_address": { 00:13:05.976 "trtype": "TCP", 00:13:05.976 "adrfam": "IPv4", 00:13:05.976 "traddr": "10.0.0.2", 00:13:05.976 "trsvcid": "4420" 00:13:05.976 }, 00:13:05.976 "peer_address": { 00:13:05.976 "trtype": "TCP", 00:13:05.976 "adrfam": "IPv4", 00:13:05.976 "traddr": "10.0.0.1", 00:13:05.976 "trsvcid": "42814" 00:13:05.976 }, 00:13:05.976 "auth": { 00:13:05.976 "state": "completed", 00:13:05.976 "digest": "sha512", 00:13:05.976 "dhgroup": "ffdhe8192" 00:13:05.976 } 00:13:05.976 } 00:13:05.976 ]' 00:13:05.976 12:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:05.976 12:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:05.976 12:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:05.976 12:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:05.976 12:37:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:06.234 12:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:06.234 12:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:06.234 12:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:06.492 12:37:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:03:YzM3ZjlmZGExOWIzOTIzYTM4MjY2YzJlMjc4YTJmNTI5YzM4YzRmMjA2N2Q1YmE4NTg4ZmQxOTVlZmUwZjRhNUvHC7o=: 00:13:07.057 12:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:07.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:07.058 12:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:13:07.058 12:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.058 12:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.058 12:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.058 12:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:13:07.058 12:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:13:07.058 12:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:13:07.058 12:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:07.058 12:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:07.058 12:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests 
sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:07.315 12:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:13:07.315 12:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:07.315 12:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:07.315 12:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:07.315 12:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:07.315 12:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:07.315 12:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:07.315 12:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:07.315 12:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.315 12:37:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:07.315 12:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:07.315 12:37:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:08.248 00:13:08.248 12:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:08.248 12:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:08.248 12:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:08.248 12:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:08.248 12:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:08.248 12:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.248 12:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.248 12:37:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.248 12:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:08.248 { 00:13:08.248 "cntlid": 145, 00:13:08.248 "qid": 0, 00:13:08.248 "state": "enabled", 00:13:08.248 "thread": "nvmf_tgt_poll_group_000", 00:13:08.248 "listen_address": { 00:13:08.248 "trtype": "TCP", 00:13:08.248 "adrfam": "IPv4", 00:13:08.248 "traddr": "10.0.0.2", 00:13:08.248 "trsvcid": "4420" 00:13:08.248 }, 00:13:08.248 "peer_address": { 00:13:08.248 "trtype": "TCP", 00:13:08.248 "adrfam": "IPv4", 00:13:08.248 "traddr": "10.0.0.1", 00:13:08.248 "trsvcid": "42834" 00:13:08.248 }, 00:13:08.248 "auth": { 00:13:08.248 "state": "completed", 00:13:08.248 "digest": "sha512", 00:13:08.248 "dhgroup": "ffdhe8192" 00:13:08.248 } 00:13:08.248 } 
00:13:08.248 ]' 00:13:08.248 12:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:08.505 12:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:08.505 12:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:08.505 12:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:08.505 12:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:08.506 12:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:08.506 12:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:08.506 12:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:08.763 12:37:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:00:NDFjNTIxNjcxZGJlYzJkYTY3ODRiOWE5MDlmOTAxYTFhMGYyODdhMmE2MGFkZDQ58CXGvw==: --dhchap-ctrl-secret DHHC-1:03:NDk4NDYxNTNjY2RlNjE5ZDQ2Y2VlZWRjZjYzMDhiYzU4YTA3ZmJjYjk2MjJkMTcxYjU5Yjk5ZDcyMjU5MzhmOaZfXz8=: 00:13:09.328 12:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:09.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:09.328 12:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:13:09.328 12:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.328 12:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.328 12:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.328 12:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key1 00:13:09.328 12:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.328 12:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.328 12:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.328 12:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:09.328 12:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:09.328 12:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:09.328 12:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:09.328 12:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:09.328 12:37:35 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:09.328 12:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:09.328 12:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:09.328 12:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:13:09.894 request: 00:13:09.894 { 00:13:09.894 "name": "nvme0", 00:13:09.894 "trtype": "tcp", 00:13:09.894 "traddr": "10.0.0.2", 00:13:09.894 "adrfam": "ipv4", 00:13:09.894 "trsvcid": "4420", 00:13:09.894 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:09.894 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5", 00:13:09.894 "prchk_reftag": false, 00:13:09.894 "prchk_guard": false, 00:13:09.894 "hdgst": false, 00:13:09.894 "ddgst": false, 00:13:09.894 "dhchap_key": "key2", 00:13:09.894 "method": "bdev_nvme_attach_controller", 00:13:09.894 "req_id": 1 00:13:09.894 } 00:13:09.894 Got JSON-RPC error response 00:13:09.894 response: 00:13:09.894 { 00:13:09.894 "code": -5, 00:13:09.894 "message": "Input/output error" 00:13:09.894 } 00:13:09.894 12:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:09.894 12:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:09.894 12:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:09.894 12:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:09.894 12:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:13:09.894 12:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.894 12:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.152 12:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.152 12:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:10.152 12:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.152 12:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.152 12:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.152 12:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:10.152 12:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:10.152 12:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:10.152 12:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:10.152 12:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:10.152 12:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:10.152 12:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:10.152 12:37:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:10.152 12:37:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:10.718 request: 00:13:10.718 { 00:13:10.718 "name": "nvme0", 00:13:10.718 "trtype": "tcp", 00:13:10.718 "traddr": "10.0.0.2", 00:13:10.718 "adrfam": "ipv4", 00:13:10.718 "trsvcid": "4420", 00:13:10.718 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:10.718 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5", 00:13:10.718 "prchk_reftag": false, 00:13:10.718 "prchk_guard": false, 00:13:10.718 "hdgst": false, 00:13:10.718 "ddgst": false, 00:13:10.718 "dhchap_key": "key1", 00:13:10.718 "dhchap_ctrlr_key": "ckey2", 00:13:10.718 "method": "bdev_nvme_attach_controller", 00:13:10.718 "req_id": 1 00:13:10.718 } 00:13:10.718 Got JSON-RPC error response 00:13:10.718 response: 00:13:10.718 { 00:13:10.718 "code": -5, 00:13:10.718 "message": "Input/output error" 00:13:10.718 } 00:13:10.718 12:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:10.718 12:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:10.718 12:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:10.718 12:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:10.718 12:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:13:10.718 12:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.718 12:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.718 12:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.718 12:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key1 00:13:10.718 12:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.718 12:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.718 12:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:10.718 12:37:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:10.718 12:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:10.718 12:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:10.718 12:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:10.718 12:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:10.718 12:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:10.718 12:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:10.718 12:37:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:10.718 12:37:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:11.285 request: 00:13:11.285 { 00:13:11.285 "name": "nvme0", 00:13:11.285 "trtype": "tcp", 00:13:11.285 "traddr": "10.0.0.2", 00:13:11.285 "adrfam": "ipv4", 00:13:11.285 "trsvcid": "4420", 00:13:11.285 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:11.285 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5", 00:13:11.285 "prchk_reftag": false, 00:13:11.285 "prchk_guard": false, 00:13:11.285 "hdgst": false, 00:13:11.285 "ddgst": false, 00:13:11.285 "dhchap_key": "key1", 00:13:11.285 "dhchap_ctrlr_key": "ckey1", 00:13:11.285 "method": "bdev_nvme_attach_controller", 00:13:11.285 "req_id": 1 00:13:11.285 } 00:13:11.285 Got JSON-RPC error response 00:13:11.285 response: 00:13:11.285 { 00:13:11.285 "code": -5, 00:13:11.285 "message": "Input/output error" 00:13:11.285 } 00:13:11.285 12:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:11.285 12:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:11.285 12:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:11.285 12:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:11.285 12:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:13:11.285 12:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.285 12:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.285 12:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.285 12:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # 
killprocess 69535 00:13:11.285 12:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 69535 ']' 00:13:11.285 12:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 69535 00:13:11.285 12:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:13:11.285 12:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:11.285 12:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69535 00:13:11.285 killing process with pid 69535 00:13:11.285 12:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:11.285 12:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:11.285 12:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69535' 00:13:11.285 12:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 69535 00:13:11.285 12:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 69535 00:13:11.543 12:37:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:13:11.543 12:37:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:11.543 12:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:11.543 12:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.543 12:37:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=72585 00:13:11.543 12:37:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:13:11.543 12:37:37 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 72585 00:13:11.543 12:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 72585 ']' 00:13:11.543 12:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.543 12:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:11.543 12:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.543 12:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:11.543 12:37:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.475 12:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:12.475 12:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:12.475 12:37:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:12.475 12:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:12.475 12:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.734 12:37:38 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:12.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
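At this point the test recycles the target: the original nvmf_tgt (pid 69535) is killed and a fresh instance (pid 72585) is started with --wait-for-rpc and the nvmf_auth log flag so the remaining, mostly negative, authentication cases can be traced. A hedged sketch of that restart, with the binary path and network namespace taken from this log; pid bookkeeping and the real waitforlisten logic live in autotest_common.sh and are only approximated here:

# stop the previous target instance
kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null

# start a new target inside the test's netns, with auth tracing enabled,
# holding it at --wait-for-rpc until the framework finishes its RPC-time setup
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!

# poll until /var/tmp/spdk.sock accepts RPCs before configuring the target
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done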
00:13:12.734 12:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:12.734 12:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 72585 00:13:12.734 12:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 72585 ']' 00:13:12.734 12:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.734 12:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:12.734 12:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.734 12:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:12.734 12:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.992 12:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:12.992 12:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:12.992 12:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:13:12.992 12:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.992 12:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.992 12:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.992 12:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:13:12.992 12:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:12.992 12:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:13:12.992 12:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:13:12.992 12:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:12.992 12:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:12.992 12:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key3 00:13:12.992 12:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.992 12:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.992 12:37:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.992 12:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:12.992 12:37:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:13.558 00:13:13.558 12:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:13.558 12:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:13.558 12:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:13:13.816 12:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:13.816 12:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:13.816 12:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.816 12:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.816 12:37:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.816 12:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:13.816 { 00:13:13.816 "cntlid": 1, 00:13:13.816 "qid": 0, 00:13:13.816 "state": "enabled", 00:13:13.816 "thread": "nvmf_tgt_poll_group_000", 00:13:13.816 "listen_address": { 00:13:13.816 "trtype": "TCP", 00:13:13.816 "adrfam": "IPv4", 00:13:13.816 "traddr": "10.0.0.2", 00:13:13.816 "trsvcid": "4420" 00:13:13.816 }, 00:13:13.816 "peer_address": { 00:13:13.816 "trtype": "TCP", 00:13:13.816 "adrfam": "IPv4", 00:13:13.816 "traddr": "10.0.0.1", 00:13:13.816 "trsvcid": "42872" 00:13:13.816 }, 00:13:13.816 "auth": { 00:13:13.816 "state": "completed", 00:13:13.816 "digest": "sha512", 00:13:13.816 "dhgroup": "ffdhe8192" 00:13:13.816 } 00:13:13.816 } 00:13:13.816 ]' 00:13:13.816 12:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:14.074 12:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:14.074 12:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:14.074 12:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:14.074 12:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:14.074 12:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:14.074 12:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:14.074 12:37:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:14.332 12:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid 16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-secret DHHC-1:03:YzM3ZjlmZGExOWIzOTIzYTM4MjY2YzJlMjc4YTJmNTI5YzM4YzRmMjA2N2Q1YmE4NTg4ZmQxOTVlZmUwZjRhNUvHC7o=: 00:13:14.898 12:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:14.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:14.898 12:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:13:14.898 12:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.898 12:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.898 12:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.898 12:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --dhchap-key key3 00:13:14.898 12:37:40 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:14.898 12:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.898 12:37:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:14.898 12:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:13:14.898 12:37:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:13:15.157 12:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:15.157 12:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:15.157 12:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:15.157 12:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:15.157 12:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:15.157 12:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:15.157 12:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:15.157 12:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:15.157 12:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:15.722 request: 00:13:15.722 { 00:13:15.722 "name": "nvme0", 00:13:15.722 "trtype": "tcp", 00:13:15.722 "traddr": "10.0.0.2", 00:13:15.722 "adrfam": "ipv4", 00:13:15.722 "trsvcid": "4420", 00:13:15.722 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:15.722 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5", 00:13:15.722 "prchk_reftag": false, 00:13:15.722 "prchk_guard": false, 00:13:15.722 "hdgst": false, 00:13:15.722 "ddgst": false, 00:13:15.722 "dhchap_key": "key3", 00:13:15.722 "method": "bdev_nvme_attach_controller", 00:13:15.722 "req_id": 1 00:13:15.722 } 00:13:15.722 Got JSON-RPC error response 00:13:15.722 response: 00:13:15.722 { 00:13:15.722 "code": -5, 00:13:15.722 "message": "Input/output error" 00:13:15.722 } 00:13:15.722 12:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:15.722 12:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:15.722 12:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:15.722 12:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:15.722 12:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 
00:13:15.722 12:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:13:15.722 12:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:15.722 12:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:15.722 12:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:15.722 12:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:15.722 12:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:15.722 12:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:15.722 12:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:15.722 12:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:15.722 12:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:15.722 12:37:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:15.722 12:37:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:15.980 request: 00:13:15.980 { 00:13:15.980 "name": "nvme0", 00:13:15.980 "trtype": "tcp", 00:13:15.980 "traddr": "10.0.0.2", 00:13:15.980 "adrfam": "ipv4", 00:13:15.980 "trsvcid": "4420", 00:13:15.980 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:15.980 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5", 00:13:15.980 "prchk_reftag": false, 00:13:15.980 "prchk_guard": false, 00:13:15.980 "hdgst": false, 00:13:15.980 "ddgst": false, 00:13:15.980 "dhchap_key": "key3", 00:13:15.980 "method": "bdev_nvme_attach_controller", 00:13:15.980 "req_id": 1 00:13:15.980 } 00:13:15.980 Got JSON-RPC error response 00:13:15.980 response: 00:13:15.980 { 00:13:15.980 "code": -5, 00:13:15.980 "message": "Input/output error" 00:13:15.980 } 00:13:15.980 12:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:15.980 12:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:15.980 12:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:15.980 12:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:15.980 12:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:13:15.980 12:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf 
%s sha256,sha384,sha512 00:13:15.980 12:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:13:15.980 12:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:15.980 12:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:15.980 12:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:16.238 12:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:13:16.238 12:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.238 12:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.238 12:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.238 12:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:13:16.238 12:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:16.238 12:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.238 12:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:16.238 12:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:16.238 12:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:13:16.238 12:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:16.238 12:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:13:16.238 12:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:16.238 12:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:13:16.238 12:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:16.238 12:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:16.238 12:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 
00:13:16.807 request: 00:13:16.807 { 00:13:16.807 "name": "nvme0", 00:13:16.807 "trtype": "tcp", 00:13:16.807 "traddr": "10.0.0.2", 00:13:16.807 "adrfam": "ipv4", 00:13:16.807 "trsvcid": "4420", 00:13:16.807 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:16.807 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5", 00:13:16.807 "prchk_reftag": false, 00:13:16.807 "prchk_guard": false, 00:13:16.807 "hdgst": false, 00:13:16.807 "ddgst": false, 00:13:16.807 "dhchap_key": "key0", 00:13:16.807 "dhchap_ctrlr_key": "key1", 00:13:16.807 "method": "bdev_nvme_attach_controller", 00:13:16.807 "req_id": 1 00:13:16.807 } 00:13:16.807 Got JSON-RPC error response 00:13:16.807 response: 00:13:16.807 { 00:13:16.807 "code": -5, 00:13:16.807 "message": "Input/output error" 00:13:16.807 } 00:13:16.807 12:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:13:16.807 12:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:16.807 12:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:16.807 12:37:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:16.807 12:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:13:16.807 12:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:13:16.807 00:13:17.064 12:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:13:17.064 12:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:13:17.064 12:37:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:17.064 12:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:17.064 12:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:17.064 12:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:17.321 12:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:13:17.321 12:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:13:17.321 12:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 69572 00:13:17.321 12:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 69572 ']' 00:13:17.321 12:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 69572 00:13:17.321 12:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:13:17.578 12:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:17.578 12:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 69572 00:13:17.578 killing process with pid 69572 00:13:17.578 12:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:17.578 12:37:43 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:17.579 12:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69572' 00:13:17.579 12:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 69572 00:13:17.579 12:37:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 69572 00:13:17.836 12:37:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:13:17.836 12:37:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:17.836 12:37:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:13:18.094 12:37:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:18.094 12:37:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:13:18.094 12:37:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:18.094 12:37:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:18.094 rmmod nvme_tcp 00:13:18.094 rmmod nvme_fabrics 00:13:18.094 rmmod nvme_keyring 00:13:18.094 12:37:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:18.094 12:37:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:13:18.094 12:37:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:13:18.094 12:37:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 72585 ']' 00:13:18.094 12:37:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 72585 00:13:18.094 12:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 72585 ']' 00:13:18.094 12:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 72585 00:13:18.094 12:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:13:18.094 12:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:18.094 12:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72585 00:13:18.094 12:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:18.094 12:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:18.094 killing process with pid 72585 00:13:18.094 12:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72585' 00:13:18.094 12:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 72585 00:13:18.094 12:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 72585 00:13:18.352 12:37:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:18.352 12:37:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:18.352 12:37:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:18.352 12:37:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:18.352 12:37:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:18.352 12:37:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.352 12:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:18.352 12:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.352 12:37:44 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:18.352 12:37:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.oNo /tmp/spdk.key-sha256.CnX /tmp/spdk.key-sha384.Aa4 /tmp/spdk.key-sha512.egO /tmp/spdk.key-sha512.fXk /tmp/spdk.key-sha384.O6C /tmp/spdk.key-sha256.scu '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:13:18.352 00:13:18.352 real 2m50.276s 00:13:18.352 user 6m48.858s 00:13:18.352 sys 0m26.793s 00:13:18.352 12:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:18.352 12:37:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.352 ************************************ 00:13:18.352 END TEST nvmf_auth_target 00:13:18.352 ************************************ 00:13:18.352 12:37:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:18.352 12:37:44 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:13:18.352 12:37:44 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:18.352 12:37:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:13:18.352 12:37:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:18.352 12:37:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:18.352 ************************************ 00:13:18.352 START TEST nvmf_bdevio_no_huge 00:13:18.352 ************************************ 00:13:18.352 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:18.611 * Looking for test storage... 00:13:18.611 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:18.611 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:18.611 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:13:18.611 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:18.611 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:18.611 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:18.611 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:18.611 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:18.611 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:18.611 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:18.611 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:18.611 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:18.611 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:18.611 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:13:18.611 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=16360ad5-8c23-4d49-afe0-9a35c426fec5 00:13:18.611 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:18.611 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:18.611 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:18.611 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:18.611 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:18.611 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:18.611 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:18.611 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:18.612 
12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:18.612 12:37:44 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:18.612 Cannot find device "nvmf_tgt_br" 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:18.612 Cannot find device "nvmf_tgt_br2" 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:18.612 Cannot find device "nvmf_tgt_br" 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:18.612 Cannot find device "nvmf_tgt_br2" 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:18.612 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:18.612 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:18.612 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:18.870 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:18.870 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:18.870 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:18.870 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:18.870 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:18.870 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:18.870 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:18.870 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:18.870 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:18.870 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:18.870 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:18.870 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:18.870 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:18.870 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:18.870 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:18.870 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:18.870 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:13:18.870 00:13:18.870 --- 10.0.0.2 ping statistics --- 00:13:18.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.870 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:13:18.870 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:18.870 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:18.870 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:13:18.870 00:13:18.870 --- 10.0.0.3 ping statistics --- 00:13:18.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.870 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:13:18.870 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:18.870 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:18.870 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:13:18.870 00:13:18.870 --- 10.0.0.1 ping statistics --- 00:13:18.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.870 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:13:18.870 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:18.870 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:13:18.870 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:18.870 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:18.870 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:18.871 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:18.871 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:18.871 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:18.871 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:18.871 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:18.871 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:18.871 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:18.871 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:18.871 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=72907 00:13:18.871 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:13:18.871 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 72907 00:13:18.871 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 72907 ']' 00:13:18.871 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:18.871 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:18.871 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:18.871 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:18.871 12:37:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:18.871 [2024-07-12 12:37:44.895156] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:13:18.871 [2024-07-12 12:37:44.895482] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:13:19.128 [2024-07-12 12:37:45.044904] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:19.128 [2024-07-12 12:37:45.196696] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:19.128 [2024-07-12 12:37:45.196775] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:19.128 [2024-07-12 12:37:45.196799] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:19.128 [2024-07-12 12:37:45.196810] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:19.129 [2024-07-12 12:37:45.196830] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:19.129 [2024-07-12 12:37:45.197005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:19.129 [2024-07-12 12:37:45.197144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:13:19.129 [2024-07-12 12:37:45.197775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:13:19.129 [2024-07-12 12:37:45.197786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:19.385 [2024-07-12 12:37:45.203553] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:19.948 12:37:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:19.948 12:37:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:13:19.948 12:37:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:19.948 12:37:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:19.948 12:37:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:19.948 12:37:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:19.948 12:37:45 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:19.948 12:37:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.948 12:37:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:19.948 [2024-07-12 12:37:45.927009] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:19.948 12:37:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.948 12:37:45 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:19.948 12:37:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.948 12:37:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:19.948 Malloc0 00:13:19.948 12:37:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.948 12:37:45 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:19.948 12:37:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.948 12:37:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:19.948 12:37:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.948 12:37:45 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:19.948 12:37:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.948 12:37:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set 
+x 00:13:19.948 12:37:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.948 12:37:45 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:19.948 12:37:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.948 12:37:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:19.948 [2024-07-12 12:37:45.967152] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:19.948 12:37:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.948 12:37:45 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:13:19.948 12:37:45 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:19.948 12:37:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:13:19.948 12:37:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:13:19.948 12:37:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:19.948 12:37:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:19.948 { 00:13:19.948 "params": { 00:13:19.948 "name": "Nvme$subsystem", 00:13:19.948 "trtype": "$TEST_TRANSPORT", 00:13:19.948 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:19.948 "adrfam": "ipv4", 00:13:19.948 "trsvcid": "$NVMF_PORT", 00:13:19.948 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:19.948 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:19.948 "hdgst": ${hdgst:-false}, 00:13:19.948 "ddgst": ${ddgst:-false} 00:13:19.948 }, 00:13:19.949 "method": "bdev_nvme_attach_controller" 00:13:19.949 } 00:13:19.949 EOF 00:13:19.949 )") 00:13:19.949 12:37:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:13:19.949 12:37:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:13:19.949 12:37:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:13:19.949 12:37:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:19.949 "params": { 00:13:19.949 "name": "Nvme1", 00:13:19.949 "trtype": "tcp", 00:13:19.949 "traddr": "10.0.0.2", 00:13:19.949 "adrfam": "ipv4", 00:13:19.949 "trsvcid": "4420", 00:13:19.949 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:19.949 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:19.949 "hdgst": false, 00:13:19.949 "ddgst": false 00:13:19.949 }, 00:13:19.949 "method": "bdev_nvme_attach_controller" 00:13:19.949 }' 00:13:20.205 [2024-07-12 12:37:46.021471] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:13:20.205 [2024-07-12 12:37:46.021563] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid72943 ] 00:13:20.205 [2024-07-12 12:37:46.173344] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:20.460 [2024-07-12 12:37:46.318508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.460 [2024-07-12 12:37:46.318654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:20.460 [2024-07-12 12:37:46.318660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.460 [2024-07-12 12:37:46.333322] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:20.460 I/O targets: 00:13:20.460 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:20.460 00:13:20.460 00:13:20.460 CUnit - A unit testing framework for C - Version 2.1-3 00:13:20.460 http://cunit.sourceforge.net/ 00:13:20.460 00:13:20.460 00:13:20.460 Suite: bdevio tests on: Nvme1n1 00:13:20.460 Test: blockdev write read block ...passed 00:13:20.460 Test: blockdev write zeroes read block ...passed 00:13:20.460 Test: blockdev write zeroes read no split ...passed 00:13:20.460 Test: blockdev write zeroes read split ...passed 00:13:20.717 Test: blockdev write zeroes read split partial ...passed 00:13:20.717 Test: blockdev reset ...[2024-07-12 12:37:46.541552] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:20.717 [2024-07-12 12:37:46.541875] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1452870 (9): Bad file descriptor 00:13:20.717 [2024-07-12 12:37:46.559390] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:20.717 passed 00:13:20.717 Test: blockdev write read 8 blocks ...passed 00:13:20.717 Test: blockdev write read size > 128k ...passed 00:13:20.717 Test: blockdev write read invalid size ...passed 00:13:20.717 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:20.717 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:20.717 Test: blockdev write read max offset ...passed 00:13:20.717 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:20.717 Test: blockdev writev readv 8 blocks ...passed 00:13:20.717 Test: blockdev writev readv 30 x 1block ...passed 00:13:20.717 Test: blockdev writev readv block ...passed 00:13:20.717 Test: blockdev writev readv size > 128k ...passed 00:13:20.717 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:20.717 Test: blockdev comparev and writev ...[2024-07-12 12:37:46.570836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:20.717 [2024-07-12 12:37:46.571033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:20.717 [2024-07-12 12:37:46.571069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:20.717 [2024-07-12 12:37:46.571085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:20.717 [2024-07-12 12:37:46.571432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:20.717 [2024-07-12 12:37:46.571457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:20.717 [2024-07-12 12:37:46.571478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:20.717 [2024-07-12 12:37:46.571491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:20.717 [2024-07-12 12:37:46.571821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:20.717 [2024-07-12 12:37:46.571848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:20.717 [2024-07-12 12:37:46.571870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:20.717 [2024-07-12 12:37:46.571882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:20.717 [2024-07-12 12:37:46.572204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:20.717 [2024-07-12 12:37:46.572237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:20.717 [2024-07-12 12:37:46.572259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:20.717 [2024-07-12 12:37:46.572272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:20.717 passed 00:13:20.717 Test: blockdev nvme passthru rw ...passed 00:13:20.717 Test: blockdev nvme passthru vendor specific ...[2024-07-12 12:37:46.573463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:20.717 [2024-07-12 12:37:46.573498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:20.717 [2024-07-12 12:37:46.573629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:20.717 [2024-07-12 12:37:46.573656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:20.717 passed 00:13:20.717 Test: blockdev nvme admin passthru ...[2024-07-12 12:37:46.573773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:20.717 [2024-07-12 12:37:46.573798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:20.717 [2024-07-12 12:37:46.573925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:20.717 [2024-07-12 12:37:46.573944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:20.717 passed 00:13:20.717 Test: blockdev copy ...passed 00:13:20.717 00:13:20.717 Run Summary: Type Total Ran Passed Failed Inactive 00:13:20.717 suites 1 1 n/a 0 0 00:13:20.717 tests 23 23 23 0 0 00:13:20.717 asserts 152 152 152 0 n/a 00:13:20.717 00:13:20.717 Elapsed time = 0.183 seconds 00:13:20.974 12:37:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:20.974 12:37:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.974 12:37:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:20.974 12:37:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.974 12:37:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:20.974 12:37:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:13:20.974 12:37:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:20.974 12:37:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:13:20.974 12:37:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:20.974 12:37:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:13:20.974 12:37:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:20.974 12:37:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:20.974 rmmod nvme_tcp 00:13:20.974 rmmod nvme_fabrics 00:13:20.974 rmmod nvme_keyring 00:13:20.974 12:37:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:21.232 12:37:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:13:21.232 12:37:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:13:21.232 12:37:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 72907 ']' 00:13:21.232 12:37:47 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 72907 00:13:21.232 12:37:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 72907 ']' 00:13:21.232 12:37:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 72907 00:13:21.232 12:37:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:13:21.232 12:37:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:21.232 12:37:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72907 00:13:21.232 killing process with pid 72907 00:13:21.232 12:37:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:13:21.232 12:37:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:13:21.232 12:37:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72907' 00:13:21.232 12:37:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 72907 00:13:21.232 12:37:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 72907 00:13:21.490 12:37:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:21.490 12:37:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:21.490 12:37:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:21.490 12:37:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:21.490 12:37:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:21.490 12:37:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.490 12:37:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:21.490 12:37:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:21.490 12:37:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:21.490 00:13:21.490 real 0m3.170s 00:13:21.490 user 0m10.322s 00:13:21.490 sys 0m1.279s 00:13:21.490 12:37:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:21.490 ************************************ 00:13:21.490 END TEST nvmf_bdevio_no_huge 00:13:21.490 ************************************ 00:13:21.490 12:37:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:13:21.751 12:37:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:21.752 12:37:47 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:21.752 12:37:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:21.752 12:37:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:21.752 12:37:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:21.752 ************************************ 00:13:21.752 START TEST nvmf_tls 00:13:21.752 ************************************ 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:13:21.752 * Looking for test storage... 
00:13:21.752 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=16360ad5-8c23-4d49-afe0-9a35c426fec5 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:21.752 Cannot find device "nvmf_tgt_br" 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:21.752 Cannot find device "nvmf_tgt_br2" 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:21.752 Cannot find device "nvmf_tgt_br" 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:21.752 Cannot find device "nvmf_tgt_br2" 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:21.752 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:21.752 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:21.752 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:13:22.019 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:22.019 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:22.020 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:22.020 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:22.020 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:22.020 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:22.020 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:22.020 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:22.020 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:22.020 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:22.020 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:22.020 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:22.020 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:22.020 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:22.020 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:22.020 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:22.020 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:22.020 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:22.020 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:22.020 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:22.020 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:13:22.020 00:13:22.020 --- 10.0.0.2 ping statistics --- 00:13:22.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.020 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:13:22.020 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:22.020 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:22.020 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:13:22.020 00:13:22.020 --- 10.0.0.3 ping statistics --- 00:13:22.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.020 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:13:22.020 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:22.020 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:22.020 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:13:22.020 00:13:22.020 --- 10.0.0.1 ping statistics --- 00:13:22.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.020 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:13:22.020 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:22.020 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:13:22.020 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:22.020 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:22.020 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:22.020 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:22.020 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:22.020 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:22.020 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:22.020 12:37:47 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:13:22.020 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:22.020 12:37:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:22.020 12:37:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:22.020 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73123 00:13:22.020 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73123 00:13:22.020 12:37:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:13:22.020 12:37:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73123 ']' 00:13:22.020 12:37:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.020 12:37:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:22.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.020 12:37:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.020 12:37:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:22.020 12:37:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:22.020 [2024-07-12 12:37:48.045096] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:13:22.020 [2024-07-12 12:37:48.045175] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:22.277 [2024-07-12 12:37:48.183372] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.277 [2024-07-12 12:37:48.304595] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:22.277 [2024-07-12 12:37:48.304668] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
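The nvmf_tgt now booting runs inside the nvmf_tgt_ns_spdk network namespace that nvmf_veth_init assembled just before the pings: veth pairs whose target-side ends carry 10.0.0.2 and 10.0.0.3 inside the namespace, 10.0.0.1 kept on nvmf_init_if for the initiator, the peer ends enslaved to the nvmf_br bridge, and an iptables ACCEPT for TCP port 4420. Condensed to one veth pair, and with the RPC wait written as a plain polling loop instead of the real waitforlisten helper, the pattern is:

  # Topology, condensed to one pair (commands verbatim from the trace above; the second
  # target interface and the link-up steps follow the same shape)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

  # Launch the target in the namespace and wait for its RPC socket.
  # The until-loop is an illustrative stand-in for waitforlisten.
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
  nvmfpid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done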
00:13:22.277 [2024-07-12 12:37:48.304685] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:22.277 [2024-07-12 12:37:48.304696] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:22.277 [2024-07-12 12:37:48.304706] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:22.277 [2024-07-12 12:37:48.304743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:23.209 12:37:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:23.209 12:37:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:23.209 12:37:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:23.209 12:37:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:23.209 12:37:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:23.209 12:37:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:23.209 12:37:49 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:13:23.209 12:37:49 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:13:23.466 true 00:13:23.466 12:37:49 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:13:23.466 12:37:49 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:23.723 12:37:49 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:13:23.723 12:37:49 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:13:23.723 12:37:49 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:23.980 12:37:49 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:23.980 12:37:49 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:13:24.238 12:37:50 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:13:24.238 12:37:50 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:13:24.238 12:37:50 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:13:24.495 12:37:50 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:24.495 12:37:50 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:13:24.753 12:37:50 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:13:24.753 12:37:50 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:13:24.753 12:37:50 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:24.753 12:37:50 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:13:24.753 12:37:50 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:13:24.753 12:37:50 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:13:24.753 12:37:50 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:13:25.010 12:37:51 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:25.010 12:37:51 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 
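The checks running through here all use the same JSON-RPC round trip against the ssl socket implementation: set an option, read the options back, and compare the field with jq. Stripped of the xtrace noise, one round trip is:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc sock_set_default_impl -i ssl                     # route new sockets through the ssl impl
  $rpc sock_impl_set_options -i ssl --tls-version 13    # request TLS 1.3
  ver=$($rpc sock_impl_get_options -i ssl | jq -r .tls_version)
  [[ $ver == 13 ]] || exit 1                            # tls.sh aborts on a mismatch

The same round trip is then repeated for --tls-version 7 and for the --enable-ktls/--disable-ktls toggles that follow.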
00:13:25.276 12:37:51 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:13:25.276 12:37:51 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:13:25.276 12:37:51 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:13:25.572 12:37:51 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:13:25.572 12:37:51 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:13:25.830 12:37:51 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:13:25.830 12:37:51 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:13:25.830 12:37:51 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:13:25.830 12:37:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:13:25.830 12:37:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:25.830 12:37:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:25.830 12:37:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:13:25.830 12:37:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:13:25.830 12:37:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:25.830 12:37:51 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:25.830 12:37:51 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:13:25.830 12:37:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:13:25.830 12:37:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:25.830 12:37:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:25.830 12:37:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:13:25.830 12:37:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:13:25.830 12:37:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:26.089 12:37:51 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:26.089 12:37:51 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:13:26.089 12:37:51 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.qeG9HMEjVa 00:13:26.089 12:37:51 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:13:26.089 12:37:51 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.9U4WV5dbMk 00:13:26.089 12:37:51 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:13:26.089 12:37:51 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:13:26.089 12:37:51 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.qeG9HMEjVa 00:13:26.089 12:37:51 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.9U4WV5dbMk 00:13:26.089 12:37:51 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:13:26.347 12:37:52 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:13:26.605 [2024-07-12 12:37:52.474729] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket 
implementaion override: uring 00:13:26.605 12:37:52 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.qeG9HMEjVa 00:13:26.605 12:37:52 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.qeG9HMEjVa 00:13:26.605 12:37:52 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:26.863 [2024-07-12 12:37:52.804237] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:26.864 12:37:52 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:27.121 12:37:53 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:27.377 [2024-07-12 12:37:53.264372] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:27.377 [2024-07-12 12:37:53.264725] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:27.377 12:37:53 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:27.635 malloc0 00:13:27.635 12:37:53 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:27.892 12:37:53 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qeG9HMEjVa 00:13:28.150 [2024-07-12 12:37:54.030146] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:28.150 12:37:54 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.qeG9HMEjVa 00:13:40.356 Initializing NVMe Controllers 00:13:40.356 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:40.356 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:40.356 Initialization complete. Launching workers. 
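The spdk_nvme_perf run whose workers are launching above connects to a subsystem that setup_nvmf_tgt configured a few lines earlier. Collapsed out of the trace, the target-side sequence is (addresses, NQNs and key path exactly as in this run):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener, hence the "experimental" notice above
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qeG9HMEjVa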
00:13:40.356 ======================================================== 00:13:40.356 Latency(us) 00:13:40.356 Device Information : IOPS MiB/s Average min max 00:13:40.356 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9177.38 35.85 6975.52 1290.11 11583.95 00:13:40.356 ======================================================== 00:13:40.356 Total : 9177.38 35.85 6975.52 1290.11 11583.95 00:13:40.356 00:13:40.356 12:38:04 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qeG9HMEjVa 00:13:40.356 12:38:04 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:40.356 12:38:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:40.356 12:38:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:40.356 12:38:04 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.qeG9HMEjVa' 00:13:40.356 12:38:04 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:40.356 12:38:04 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73354 00:13:40.356 12:38:04 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:40.356 12:38:04 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:40.356 12:38:04 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73354 /var/tmp/bdevperf.sock 00:13:40.356 12:38:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73354 ']' 00:13:40.356 12:38:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:40.356 12:38:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:40.356 12:38:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:40.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:40.356 12:38:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:40.356 12:38:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:40.356 [2024-07-12 12:38:04.305824] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
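The bdevperf process whose startup banner appears here is the positive TLS case (target/tls.sh@143): run_bdevperf starts bdevperf with -z so it waits for RPC, attaches a controller over the TLS listener with the matching key, then drives the verify workload through bdevperf.py. Pulled out of the trace, the three steps are:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qeG9HMEjVa
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -t 20 -s /var/tmp/bdevperf.sock perform_tests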
00:13:40.356 [2024-07-12 12:38:04.306762] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73354 ] 00:13:40.356 [2024-07-12 12:38:04.447726] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.356 [2024-07-12 12:38:04.601061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:40.356 [2024-07-12 12:38:04.658533] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:40.356 12:38:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:40.356 12:38:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:40.356 12:38:05 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qeG9HMEjVa 00:13:40.356 [2024-07-12 12:38:05.544688] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:40.356 [2024-07-12 12:38:05.544827] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:40.356 TLSTESTn1 00:13:40.356 12:38:05 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:40.356 Running I/O for 10 seconds... 00:13:50.325 00:13:50.325 Latency(us) 00:13:50.325 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:50.325 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:50.325 Verification LBA range: start 0x0 length 0x2000 00:13:50.325 TLSTESTn1 : 10.02 3893.50 15.21 0.00 0.00 32804.96 8460.10 37653.41 00:13:50.325 =================================================================================================================== 00:13:50.325 Total : 3893.50 15.21 0.00 0.00 32804.96 8460.10 37653.41 00:13:50.325 0 00:13:50.325 12:38:15 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:50.325 12:38:15 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 73354 00:13:50.325 12:38:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73354 ']' 00:13:50.325 12:38:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73354 00:13:50.325 12:38:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:50.325 12:38:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:50.325 12:38:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73354 00:13:50.325 killing process with pid 73354 00:13:50.325 Received shutdown signal, test time was about 10.000000 seconds 00:13:50.325 00:13:50.325 Latency(us) 00:13:50.325 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:50.325 =================================================================================================================== 00:13:50.325 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:50.325 12:38:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:50.325 12:38:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' 
reactor_2 = sudo ']' 00:13:50.325 12:38:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73354' 00:13:50.325 12:38:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73354 00:13:50.325 [2024-07-12 12:38:15.829310] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:50.325 12:38:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73354 00:13:50.325 12:38:16 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9U4WV5dbMk 00:13:50.325 12:38:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:50.325 12:38:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9U4WV5dbMk 00:13:50.325 12:38:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:50.325 12:38:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:50.325 12:38:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:13:50.325 12:38:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:50.325 12:38:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9U4WV5dbMk 00:13:50.325 12:38:16 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:50.325 12:38:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:50.325 12:38:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:50.325 12:38:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.9U4WV5dbMk' 00:13:50.325 12:38:16 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:50.325 12:38:16 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73488 00:13:50.325 12:38:16 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:50.325 12:38:16 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:50.325 12:38:16 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73488 /var/tmp/bdevperf.sock 00:13:50.325 12:38:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73488 ']' 00:13:50.325 12:38:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:50.325 12:38:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:50.325 12:38:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:50.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:50.325 12:38:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:50.325 12:38:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:50.326 [2024-07-12 12:38:16.174759] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
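From here the suite switches to negative cases. target/tls.sh@146 reruns run_bdevperf under NOT with the second key, /tmp/tmp.9U4WV5dbMk, which was generated earlier but never registered with the subsystem, so the attach is expected to fail and the inverted exit status is what keeps the overall test green. Reduced to its core, the inversion wrapper behaves like this sketch (illustrative only; the real NOT in autotest_common.sh also runs through the valid_exec_arg check visible in the trace):

  NOT() {
      if "$@"; then
          return 1    # the command unexpectedly succeeded, so the negative test fails
      else
          return 0    # the expected failure occurred, so the negative test passes
      fi
  }

The same wrapper is reused below for the wrong-hostnqn, wrong-subnqn and no-PSK attempts.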
00:13:50.326 [2024-07-12 12:38:16.175577] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73488 ] 00:13:50.326 [2024-07-12 12:38:16.312702] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.584 [2024-07-12 12:38:16.449795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:50.584 [2024-07-12 12:38:16.506857] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:51.194 12:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:51.194 12:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:51.194 12:38:17 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9U4WV5dbMk 00:13:51.452 [2024-07-12 12:38:17.385989] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:51.452 [2024-07-12 12:38:17.386164] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:51.452 [2024-07-12 12:38:17.391392] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:51.452 [2024-07-12 12:38:17.391937] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e61f0 (107): Transport endpoint is not connected 00:13:51.452 [2024-07-12 12:38:17.392924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e61f0 (9): Bad file descriptor 00:13:51.452 [2024-07-12 12:38:17.393920] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:51.452 [2024-07-12 12:38:17.393960] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:51.452 [2024-07-12 12:38:17.393973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:13:51.452 request: 00:13:51.452 { 00:13:51.452 "name": "TLSTEST", 00:13:51.452 "trtype": "tcp", 00:13:51.452 "traddr": "10.0.0.2", 00:13:51.452 "adrfam": "ipv4", 00:13:51.452 "trsvcid": "4420", 00:13:51.452 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:51.452 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:51.452 "prchk_reftag": false, 00:13:51.452 "prchk_guard": false, 00:13:51.452 "hdgst": false, 00:13:51.452 "ddgst": false, 00:13:51.452 "psk": "/tmp/tmp.9U4WV5dbMk", 00:13:51.452 "method": "bdev_nvme_attach_controller", 00:13:51.452 "req_id": 1 00:13:51.452 } 00:13:51.452 Got JSON-RPC error response 00:13:51.452 response: 00:13:51.452 { 00:13:51.452 "code": -5, 00:13:51.452 "message": "Input/output error" 00:13:51.453 } 00:13:51.453 12:38:17 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73488 00:13:51.453 12:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73488 ']' 00:13:51.453 12:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73488 00:13:51.453 12:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:51.453 12:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:51.453 12:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73488 00:13:51.453 killing process with pid 73488 00:13:51.453 Received shutdown signal, test time was about 10.000000 seconds 00:13:51.453 00:13:51.453 Latency(us) 00:13:51.453 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:51.453 =================================================================================================================== 00:13:51.453 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:51.453 12:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:51.453 12:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:51.453 12:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73488' 00:13:51.453 12:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73488 00:13:51.453 [2024-07-12 12:38:17.449722] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:51.453 12:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73488 00:13:51.710 12:38:17 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:51.710 12:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:51.710 12:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:51.710 12:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:51.710 12:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:51.710 12:38:17 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.qeG9HMEjVa 00:13:51.710 12:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:51.710 12:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.qeG9HMEjVa 00:13:51.710 12:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:51.710 12:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:51.710 12:38:17 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:13:51.710 12:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:51.710 12:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.qeG9HMEjVa 00:13:51.710 12:38:17 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:51.710 12:38:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:51.710 12:38:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:13:51.710 12:38:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.qeG9HMEjVa' 00:13:51.710 12:38:17 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:51.710 12:38:17 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73510 00:13:51.710 12:38:17 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:51.710 12:38:17 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:51.710 12:38:17 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73510 /var/tmp/bdevperf.sock 00:13:51.710 12:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73510 ']' 00:13:51.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:51.710 12:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:51.710 12:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:51.710 12:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:51.710 12:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:51.710 12:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:51.968 [2024-07-12 12:38:17.792176] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:13:51.968 [2024-07-12 12:38:17.792294] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73510 ] 00:13:51.968 [2024-07-12 12:38:17.933256] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.227 [2024-07-12 12:38:18.082501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:52.227 [2024-07-12 12:38:18.141961] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:52.791 12:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:52.791 12:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:52.791 12:38:18 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.qeG9HMEjVa 00:13:53.048 [2024-07-12 12:38:18.961541] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:53.048 [2024-07-12 12:38:18.961718] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:53.048 [2024-07-12 12:38:18.969422] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:53.048 [2024-07-12 12:38:18.969525] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:53.048 [2024-07-12 12:38:18.969601] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:53.048 [2024-07-12 12:38:18.969894] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187b1f0 (107): Transport endpoint is not connected 00:13:53.048 [2024-07-12 12:38:18.970874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187b1f0 (9): Bad file descriptor 00:13:53.048 [2024-07-12 12:38:18.971871] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:53.048 [2024-07-12 12:38:18.971937] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:53.048 [2024-07-12 12:38:18.971976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
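This second negative case (target/tls.sh@149) fails for a different reason than the wrong key: the key file is the registered one, but the target looks the PSK up by an identity derived from both NQNs, and only the host1/cnode1 pairing was added with nvmf_subsystem_add_host. The identity it reports in the errors above can be reconstructed as:

  # Illustrative only: the target composes this string internally (tcp.c/posix.c);
  # the NVMe0R01 prefix is taken verbatim from the error messages above.
  hostnqn=nqn.2016-06.io.spdk:host2
  subnqn=nqn.2016-06.io.spdk:cnode1
  printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"   # no PSK is registered for this pairing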
00:13:53.048 request: 00:13:53.048 { 00:13:53.048 "name": "TLSTEST", 00:13:53.048 "trtype": "tcp", 00:13:53.048 "traddr": "10.0.0.2", 00:13:53.048 "adrfam": "ipv4", 00:13:53.048 "trsvcid": "4420", 00:13:53.048 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:53.048 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:13:53.048 "prchk_reftag": false, 00:13:53.048 "prchk_guard": false, 00:13:53.048 "hdgst": false, 00:13:53.048 "ddgst": false, 00:13:53.048 "psk": "/tmp/tmp.qeG9HMEjVa", 00:13:53.048 "method": "bdev_nvme_attach_controller", 00:13:53.048 "req_id": 1 00:13:53.048 } 00:13:53.048 Got JSON-RPC error response 00:13:53.048 response: 00:13:53.048 { 00:13:53.048 "code": -5, 00:13:53.048 "message": "Input/output error" 00:13:53.048 } 00:13:53.048 12:38:18 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73510 00:13:53.048 12:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73510 ']' 00:13:53.048 12:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73510 00:13:53.048 12:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:53.048 12:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:53.048 12:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73510 00:13:53.048 killing process with pid 73510 00:13:53.048 Received shutdown signal, test time was about 10.000000 seconds 00:13:53.048 00:13:53.048 Latency(us) 00:13:53.048 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:53.048 =================================================================================================================== 00:13:53.048 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:53.049 12:38:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:53.049 12:38:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:53.049 12:38:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73510' 00:13:53.049 12:38:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73510 00:13:53.049 [2024-07-12 12:38:19.022807] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:53.049 12:38:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73510 00:13:53.307 12:38:19 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:53.307 12:38:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:53.307 12:38:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:53.307 12:38:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:53.307 12:38:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:53.307 12:38:19 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.qeG9HMEjVa 00:13:53.307 12:38:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:53.307 12:38:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.qeG9HMEjVa 00:13:53.307 12:38:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:53.307 12:38:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:53.307 12:38:19 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:13:53.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:53.307 12:38:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:53.307 12:38:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.qeG9HMEjVa 00:13:53.307 12:38:19 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:53.307 12:38:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:13:53.307 12:38:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:53.307 12:38:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.qeG9HMEjVa' 00:13:53.307 12:38:19 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:53.307 12:38:19 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73543 00:13:53.307 12:38:19 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:53.307 12:38:19 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73543 /var/tmp/bdevperf.sock 00:13:53.307 12:38:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73543 ']' 00:13:53.307 12:38:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:53.307 12:38:19 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:53.307 12:38:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:53.307 12:38:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:53.307 12:38:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:53.307 12:38:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:53.307 [2024-07-12 12:38:19.360527] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:13:53.307 [2024-07-12 12:38:19.360647] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73543 ] 00:13:53.565 [2024-07-12 12:38:19.499399] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.824 [2024-07-12 12:38:19.646512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:53.824 [2024-07-12 12:38:19.704537] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:54.390 12:38:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:54.390 12:38:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:54.390 12:38:20 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.qeG9HMEjVa 00:13:54.648 [2024-07-12 12:38:20.568298] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:54.648 [2024-07-12 12:38:20.568500] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:13:54.648 [2024-07-12 12:38:20.579619] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:54.648 [2024-07-12 12:38:20.579677] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:54.648 [2024-07-12 12:38:20.579752] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:54.648 [2024-07-12 12:38:20.580181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15391f0 (107): Transport endpoint is not connected 00:13:54.648 [2024-07-12 12:38:20.581170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15391f0 (9): Bad file descriptor 00:13:54.648 [2024-07-12 12:38:20.582166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:13:54.648 [2024-07-12 12:38:20.582190] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:54.648 [2024-07-12 12:38:20.582205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:13:54.648 request: 00:13:54.648 { 00:13:54.648 "name": "TLSTEST", 00:13:54.648 "trtype": "tcp", 00:13:54.648 "traddr": "10.0.0.2", 00:13:54.648 "adrfam": "ipv4", 00:13:54.648 "trsvcid": "4420", 00:13:54.648 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:13:54.648 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:54.648 "prchk_reftag": false, 00:13:54.648 "prchk_guard": false, 00:13:54.648 "hdgst": false, 00:13:54.648 "ddgst": false, 00:13:54.648 "psk": "/tmp/tmp.qeG9HMEjVa", 00:13:54.648 "method": "bdev_nvme_attach_controller", 00:13:54.648 "req_id": 1 00:13:54.648 } 00:13:54.648 Got JSON-RPC error response 00:13:54.648 response: 00:13:54.648 { 00:13:54.648 "code": -5, 00:13:54.648 "message": "Input/output error" 00:13:54.648 } 00:13:54.648 12:38:20 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73543 00:13:54.648 12:38:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73543 ']' 00:13:54.648 12:38:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73543 00:13:54.648 12:38:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:54.648 12:38:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:54.648 12:38:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73543 00:13:54.648 killing process with pid 73543 00:13:54.648 Received shutdown signal, test time was about 10.000000 seconds 00:13:54.648 00:13:54.648 Latency(us) 00:13:54.648 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:54.648 =================================================================================================================== 00:13:54.648 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:54.648 12:38:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:54.648 12:38:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:54.648 12:38:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73543' 00:13:54.648 12:38:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73543 00:13:54.649 [2024-07-12 12:38:20.630833] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:13:54.649 12:38:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73543 00:13:54.907 12:38:20 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:54.907 12:38:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:54.907 12:38:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:54.907 12:38:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:54.907 12:38:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:54.907 12:38:20 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:54.907 12:38:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:13:54.907 12:38:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:54.907 12:38:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:13:54.907 12:38:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:54.907 12:38:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 
00:13:54.907 12:38:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:54.907 12:38:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:54.907 12:38:20 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:54.907 12:38:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:54.907 12:38:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:54.907 12:38:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:13:54.907 12:38:20 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:54.907 12:38:20 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73571 00:13:54.907 12:38:20 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:54.907 12:38:20 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:54.907 12:38:20 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73571 /var/tmp/bdevperf.sock 00:13:54.907 12:38:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73571 ']' 00:13:54.907 12:38:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:54.907 12:38:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:54.907 12:38:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:54.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:54.907 12:38:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:54.907 12:38:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:54.907 [2024-07-12 12:38:20.969837] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
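The final bdevperf instance starting here (pid 73571, target/tls.sh@155) attempts the attach with no --psk at all; the listener was created with -k, so the plain connection never yields a working controller and NOT again expects the failure. The attach differs from the positive case above only by the missing key:

  # Verbatim from this run, minus the --psk argument used earlier:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1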
00:13:54.907 [2024-07-12 12:38:20.969974] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73571 ] 00:13:55.165 [2024-07-12 12:38:21.110792] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.423 [2024-07-12 12:38:21.264607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:55.423 [2024-07-12 12:38:21.323195] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:55.988 12:38:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:55.988 12:38:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:55.988 12:38:21 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:13:56.245 [2024-07-12 12:38:22.199901] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:56.245 [2024-07-12 12:38:22.201495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1191c00 (9): Bad file descriptor 00:13:56.245 [2024-07-12 12:38:22.202486] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:13:56.245 [2024-07-12 12:38:22.202531] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:13:56.245 [2024-07-12 12:38:22.202554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
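The attach traced just above was issued with no --psk at all (target/tls.sh@23 sets psk to the empty string), so the connection is torn down during controller initialization; that is what the errno 107, "Failed to initialize SSD" and the -5 JSON-RPC error below amount to. For reference, the traced command is the same attach as before minus the key argument:

# No --psk: against the TLS-enabled (-k) listener this is expected to fail,
# which is exactly what the NOT wrapper around run_bdevperf checks.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1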
00:13:56.245 request: 00:13:56.245 { 00:13:56.245 "name": "TLSTEST", 00:13:56.245 "trtype": "tcp", 00:13:56.245 "traddr": "10.0.0.2", 00:13:56.245 "adrfam": "ipv4", 00:13:56.245 "trsvcid": "4420", 00:13:56.245 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:56.245 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:56.245 "prchk_reftag": false, 00:13:56.245 "prchk_guard": false, 00:13:56.245 "hdgst": false, 00:13:56.245 "ddgst": false, 00:13:56.245 "method": "bdev_nvme_attach_controller", 00:13:56.245 "req_id": 1 00:13:56.245 } 00:13:56.245 Got JSON-RPC error response 00:13:56.245 response: 00:13:56.245 { 00:13:56.245 "code": -5, 00:13:56.245 "message": "Input/output error" 00:13:56.245 } 00:13:56.245 12:38:22 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73571 00:13:56.245 12:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73571 ']' 00:13:56.245 12:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73571 00:13:56.245 12:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:56.245 12:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:56.245 12:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73571 00:13:56.245 12:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:13:56.245 12:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:13:56.245 12:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73571' 00:13:56.245 killing process with pid 73571 00:13:56.245 12:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73571 00:13:56.245 Received shutdown signal, test time was about 10.000000 seconds 00:13:56.245 00:13:56.245 Latency(us) 00:13:56.245 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:56.245 =================================================================================================================== 00:13:56.245 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:56.245 12:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73571 00:13:56.502 12:38:22 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:13:56.503 12:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:13:56.503 12:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:56.503 12:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:56.503 12:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:56.503 12:38:22 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 73123 00:13:56.503 12:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73123 ']' 00:13:56.503 12:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73123 00:13:56.503 12:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:13:56.503 12:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:56.503 12:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73123 00:13:56.503 12:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:56.503 12:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:56.503 killing process with pid 73123 00:13:56.503 12:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
73123' 00:13:56.503 12:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73123 00:13:56.503 [2024-07-12 12:38:22.550933] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:13:56.503 12:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73123 00:13:56.760 12:38:22 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:13:56.760 12:38:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:13:56.760 12:38:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:13:56.760 12:38:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:13:56.760 12:38:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:13:56.760 12:38:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:13:56.760 12:38:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:13:57.018 12:38:22 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:57.018 12:38:22 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:13:57.018 12:38:22 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.duiv4dqpxY 00:13:57.018 12:38:22 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:57.018 12:38:22 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.duiv4dqpxY 00:13:57.018 12:38:22 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:13:57.018 12:38:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:57.018 12:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:57.018 12:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:57.018 12:38:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73608 00:13:57.018 12:38:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:57.018 12:38:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73608 00:13:57.018 12:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73608 ']' 00:13:57.018 12:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.018 12:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:57.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.018 12:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.018 12:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:57.018 12:38:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:57.018 [2024-07-12 12:38:22.916838] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:13:57.018 [2024-07-12 12:38:22.916944] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:57.018 [2024-07-12 12:38:23.056351] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.276 [2024-07-12 12:38:23.158261] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:57.276 [2024-07-12 12:38:23.158344] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:57.276 [2024-07-12 12:38:23.158371] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:57.276 [2024-07-12 12:38:23.158381] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:57.276 [2024-07-12 12:38:23.158391] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:57.276 [2024-07-12 12:38:23.158429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:57.276 [2024-07-12 12:38:23.216121] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:13:57.842 12:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:57.842 12:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:13:57.842 12:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:57.842 12:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:57.842 12:38:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:57.842 12:38:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:57.842 12:38:23 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.duiv4dqpxY 00:13:57.842 12:38:23 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.duiv4dqpxY 00:13:57.842 12:38:23 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:58.100 [2024-07-12 12:38:24.096771] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:58.100 12:38:24 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:58.358 12:38:24 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:13:58.642 [2024-07-12 12:38:24.624885] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:58.642 [2024-07-12 12:38:24.625185] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:58.642 12:38:24 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:58.900 malloc0 00:13:58.900 12:38:24 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:59.158 12:38:25 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.duiv4dqpxY 00:13:59.415 
[2024-07-12 12:38:25.346564] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:13:59.415 12:38:25 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.duiv4dqpxY 00:13:59.415 12:38:25 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:59.415 12:38:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:59.415 12:38:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:59.415 12:38:25 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.duiv4dqpxY' 00:13:59.415 12:38:25 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:59.415 12:38:25 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:59.415 12:38:25 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73663 00:13:59.415 12:38:25 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:59.415 12:38:25 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73663 /var/tmp/bdevperf.sock 00:13:59.415 12:38:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73663 ']' 00:13:59.415 12:38:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:59.415 12:38:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:59.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:59.415 12:38:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:59.415 12:38:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:59.415 12:38:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:59.415 [2024-07-12 12:38:25.412070] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:13:59.415 [2024-07-12 12:38:25.412190] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73663 ] 00:13:59.673 [2024-07-12 12:38:25.546820] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.673 [2024-07-12 12:38:25.705909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:59.931 [2024-07-12 12:38:25.765417] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:00.496 12:38:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:00.496 12:38:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:00.496 12:38:26 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.duiv4dqpxY 00:14:00.754 [2024-07-12 12:38:26.605050] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:00.754 [2024-07-12 12:38:26.605202] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:00.754 TLSTESTn1 00:14:00.754 12:38:26 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:00.754 Running I/O for 10 seconds... 00:14:13.010 00:14:13.010 Latency(us) 00:14:13.010 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.010 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:13.010 Verification LBA range: start 0x0 length 0x2000 00:14:13.010 TLSTESTn1 : 10.02 4030.08 15.74 0.00 0.00 31700.93 6225.92 25141.99 00:14:13.010 =================================================================================================================== 00:14:13.010 Total : 4030.08 15.74 0.00 0.00 31700.93 6225.92 25141.99 00:14:13.010 0 00:14:13.010 12:38:36 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:13.010 12:38:36 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 73663 00:14:13.010 12:38:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73663 ']' 00:14:13.010 12:38:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73663 00:14:13.010 12:38:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:13.010 12:38:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:13.010 12:38:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73663 00:14:13.010 killing process with pid 73663 00:14:13.010 Received shutdown signal, test time was about 10.000000 seconds 00:14:13.010 00:14:13.010 Latency(us) 00:14:13.010 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.010 =================================================================================================================== 00:14:13.010 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:13.010 12:38:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:13.010 12:38:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' 
reactor_2 = sudo ']' 00:14:13.010 12:38:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73663' 00:14:13.010 12:38:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73663 00:14:13.010 [2024-07-12 12:38:36.885914] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:13.010 12:38:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73663 00:14:13.010 12:38:37 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.duiv4dqpxY 00:14:13.010 12:38:37 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.duiv4dqpxY 00:14:13.010 12:38:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:13.010 12:38:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.duiv4dqpxY 00:14:13.010 12:38:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:13.010 12:38:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:13.010 12:38:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:13.010 12:38:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:13.010 12:38:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.duiv4dqpxY 00:14:13.010 12:38:37 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:13.010 12:38:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:13.010 12:38:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:13.010 12:38:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.duiv4dqpxY' 00:14:13.010 12:38:37 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:13.010 12:38:37 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=73792 00:14:13.010 12:38:37 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:13.010 12:38:37 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:13.010 12:38:37 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 73792 /var/tmp/bdevperf.sock 00:14:13.010 12:38:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73792 ']' 00:14:13.010 12:38:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:13.010 12:38:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:13.010 12:38:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:13.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:13.010 12:38:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:13.010 12:38:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:13.010 [2024-07-12 12:38:37.236513] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:14:13.010 [2024-07-12 12:38:37.237027] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73792 ] 00:14:13.010 [2024-07-12 12:38:37.377970] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.010 [2024-07-12 12:38:37.525527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:13.010 [2024-07-12 12:38:37.583800] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:13.010 12:38:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:13.010 12:38:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:13.010 12:38:38 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.duiv4dqpxY 00:14:13.010 [2024-07-12 12:38:38.395152] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:13.010 [2024-07-12 12:38:38.395251] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:14:13.010 [2024-07-12 12:38:38.395262] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.duiv4dqpxY 00:14:13.010 request: 00:14:13.010 { 00:14:13.010 "name": "TLSTEST", 00:14:13.010 "trtype": "tcp", 00:14:13.010 "traddr": "10.0.0.2", 00:14:13.010 "adrfam": "ipv4", 00:14:13.010 "trsvcid": "4420", 00:14:13.010 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:13.010 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:13.010 "prchk_reftag": false, 00:14:13.010 "prchk_guard": false, 00:14:13.010 "hdgst": false, 00:14:13.010 "ddgst": false, 00:14:13.010 "psk": "/tmp/tmp.duiv4dqpxY", 00:14:13.010 "method": "bdev_nvme_attach_controller", 00:14:13.010 "req_id": 1 00:14:13.010 } 00:14:13.010 Got JSON-RPC error response 00:14:13.010 response: 00:14:13.010 { 00:14:13.010 "code": -1, 00:14:13.010 "message": "Operation not permitted" 00:14:13.010 } 00:14:13.010 12:38:38 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 73792 00:14:13.010 12:38:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73792 ']' 00:14:13.010 12:38:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73792 00:14:13.010 12:38:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:13.010 12:38:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:13.010 12:38:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73792 00:14:13.010 killing process with pid 73792 00:14:13.010 Received shutdown signal, test time was about 10.000000 seconds 00:14:13.010 00:14:13.010 Latency(us) 00:14:13.010 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.010 =================================================================================================================== 00:14:13.010 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:13.010 12:38:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:13.010 12:38:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:13.010 12:38:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # 
echo 'killing process with pid 73792' 00:14:13.010 12:38:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73792 00:14:13.010 12:38:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73792 00:14:13.010 12:38:38 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:13.010 12:38:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:13.010 12:38:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:13.010 12:38:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:13.010 12:38:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:13.010 12:38:38 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 73608 00:14:13.010 12:38:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73608 ']' 00:14:13.010 12:38:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73608 00:14:13.010 12:38:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:13.010 12:38:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:13.010 12:38:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73608 00:14:13.010 killing process with pid 73608 00:14:13.010 12:38:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:13.010 12:38:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:13.010 12:38:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73608' 00:14:13.010 12:38:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73608 00:14:13.010 [2024-07-12 12:38:38.747549] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:13.010 12:38:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73608 00:14:13.010 12:38:38 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:14:13.010 12:38:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:13.010 12:38:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:13.010 12:38:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:13.010 12:38:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73830 00:14:13.010 12:38:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:13.010 12:38:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73830 00:14:13.010 12:38:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73830 ']' 00:14:13.010 12:38:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.010 12:38:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:13.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.010 12:38:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.010 12:38:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:13.010 12:38:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:13.010 [2024-07-12 12:38:39.057331] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:14:13.010 [2024-07-12 12:38:39.057454] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:13.268 [2024-07-12 12:38:39.193777] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.268 [2024-07-12 12:38:39.313069] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:13.268 [2024-07-12 12:38:39.313139] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:13.268 [2024-07-12 12:38:39.313167] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:13.268 [2024-07-12 12:38:39.313175] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:13.268 [2024-07-12 12:38:39.313183] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:13.268 [2024-07-12 12:38:39.313211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:13.526 [2024-07-12 12:38:39.369674] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:14.092 12:38:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:14.092 12:38:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:14.092 12:38:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:14.092 12:38:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:14.092 12:38:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:14.092 12:38:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:14.092 12:38:40 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.duiv4dqpxY 00:14:14.092 12:38:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:14.092 12:38:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.duiv4dqpxY 00:14:14.092 12:38:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:14:14.092 12:38:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:14.092 12:38:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:14:14.092 12:38:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:14.092 12:38:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.duiv4dqpxY 00:14:14.092 12:38:40 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.duiv4dqpxY 00:14:14.092 12:38:40 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:14.349 [2024-07-12 12:38:40.311631] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:14.349 12:38:40 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:14.607 12:38:40 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:14.865 [2024-07-12 12:38:40.855771] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: 
TLS support is considered experimental 00:14:14.865 [2024-07-12 12:38:40.856076] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:14.865 12:38:40 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:15.122 malloc0 00:14:15.122 12:38:41 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:15.380 12:38:41 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.duiv4dqpxY 00:14:15.638 [2024-07-12 12:38:41.564704] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:14:15.638 [2024-07-12 12:38:41.564772] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:14:15.638 [2024-07-12 12:38:41.564839] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:14:15.638 request: 00:14:15.638 { 00:14:15.638 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:15.638 "host": "nqn.2016-06.io.spdk:host1", 00:14:15.638 "psk": "/tmp/tmp.duiv4dqpxY", 00:14:15.638 "method": "nvmf_subsystem_add_host", 00:14:15.638 "req_id": 1 00:14:15.638 } 00:14:15.638 Got JSON-RPC error response 00:14:15.638 response: 00:14:15.638 { 00:14:15.638 "code": -32603, 00:14:15.638 "message": "Internal error" 00:14:15.638 } 00:14:15.638 12:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:15.638 12:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:15.638 12:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:15.638 12:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:15.638 12:38:41 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 73830 00:14:15.638 12:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73830 ']' 00:14:15.638 12:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73830 00:14:15.638 12:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:15.638 12:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:15.638 12:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73830 00:14:15.638 killing process with pid 73830 00:14:15.638 12:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:15.638 12:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:15.638 12:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73830' 00:14:15.638 12:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73830 00:14:15.638 12:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73830 00:14:15.896 12:38:41 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.duiv4dqpxY 00:14:15.896 12:38:41 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:14:15.896 12:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:15.896 12:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:15.896 12:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:15.896 12:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73893 00:14:15.896 
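Both failures above trace back to the chmod 0666 at target/tls.sh@170: with the key file world-readable, the initiator refuses to load it ("Incorrect permissions for PSK file", JSON-RPC -1 Operation not permitted) and the target's nvmf_subsystem_add_host fails the same check (-32603 Internal error). The remedy the test applies is simply to restore restrictive permissions before reconfiguring, roughly:

# PSK files must not be accessible to group/other; 0600 is what the test uses.
# The key path is this run's temporary file.
chmod 0600 /tmp/tmp.duiv4dqpxY
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.duiv4dqpxY

With the mode restored at tls.sh@181, the fresh target instance started below accepts the key again.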
12:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:15.896 12:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73893 00:14:15.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.896 12:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73893 ']' 00:14:15.896 12:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.896 12:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:15.896 12:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.896 12:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:15.896 12:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:15.896 [2024-07-12 12:38:41.933024] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:14:15.896 [2024-07-12 12:38:41.933391] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.155 [2024-07-12 12:38:42.072592] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.155 [2024-07-12 12:38:42.182873] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:16.155 [2024-07-12 12:38:42.183145] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:16.155 [2024-07-12 12:38:42.183301] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:16.155 [2024-07-12 12:38:42.183475] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:16.155 [2024-07-12 12:38:42.183488] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:16.155 [2024-07-12 12:38:42.183518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:16.414 [2024-07-12 12:38:42.240946] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:16.980 12:38:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:16.980 12:38:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:16.981 12:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:16.981 12:38:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:16.981 12:38:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:16.981 12:38:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:16.981 12:38:43 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.duiv4dqpxY 00:14:16.981 12:38:43 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.duiv4dqpxY 00:14:16.981 12:38:43 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:17.238 [2024-07-12 12:38:43.264156] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:17.238 12:38:43 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:17.495 12:38:43 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:17.754 [2024-07-12 12:38:43.748236] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:17.754 [2024-07-12 12:38:43.748540] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:17.754 12:38:43 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:18.012 malloc0 00:14:18.012 12:38:44 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:18.271 12:38:44 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.duiv4dqpxY 00:14:18.529 [2024-07-12 12:38:44.481152] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:18.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
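The key the target just loaded from /tmp/tmp.duiv4dqpxY is the interchange-format string produced by format_interchange_psk earlier in the run (the NVMeTLSkey-1:02: value). Judging by that output, the helper base64-encodes the configured key together with a 4-byte CRC32 and joins it with the NVMeTLSkey-1 prefix and a two-digit hash identifier; the snippet below is only a sketch of that transformation, with the internals inferred from the trace rather than copied from nvmf/common.sh:

# Illustrative reconstruction of the interchange PSK seen at target/tls.sh@159.
prefix=NVMeTLSkey-1
key=00112233445566778899aabbccddeeff0011223344556677  # treated as an ASCII string, not hex bytes
digest=2                                              # rendered as the '02' hash identifier field
python - <<EOF
import base64, zlib
k = b"$key"
crc = zlib.crc32(k).to_bytes(4, "little")  # assumed 4-byte little-endian CRC32 suffix
print("$prefix:{:02x}:{}:".format($digest, base64.b64encode(k + crc).decode()))
EOF
# Under those assumptions this reproduces the NVMeTLSkey-1:02: string captured
# earlier, which is then written to the 0600 key file the target consumes.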
00:14:18.529 12:38:44 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=73953 00:14:18.529 12:38:44 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:18.529 12:38:44 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:18.529 12:38:44 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 73953 /var/tmp/bdevperf.sock 00:14:18.529 12:38:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73953 ']' 00:14:18.529 12:38:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:18.529 12:38:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:18.529 12:38:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:18.530 12:38:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:18.530 12:38:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:18.530 [2024-07-12 12:38:44.556760] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:14:18.530 [2024-07-12 12:38:44.557105] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73953 ] 00:14:18.788 [2024-07-12 12:38:44.699699] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.046 [2024-07-12 12:38:44.865023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:19.046 [2024-07-12 12:38:44.924375] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:19.613 12:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:19.613 12:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:19.613 12:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.duiv4dqpxY 00:14:19.613 [2024-07-12 12:38:45.673936] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:19.613 [2024-07-12 12:38:45.674089] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:19.871 TLSTESTn1 00:14:19.871 12:38:45 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:14:20.129 12:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:14:20.129 "subsystems": [ 00:14:20.129 { 00:14:20.129 "subsystem": "keyring", 00:14:20.129 "config": [] 00:14:20.129 }, 00:14:20.129 { 00:14:20.129 "subsystem": "iobuf", 00:14:20.129 "config": [ 00:14:20.129 { 00:14:20.129 "method": "iobuf_set_options", 00:14:20.129 "params": { 00:14:20.129 "small_pool_count": 8192, 00:14:20.129 "large_pool_count": 1024, 00:14:20.129 "small_bufsize": 8192, 00:14:20.129 "large_bufsize": 135168 00:14:20.129 } 00:14:20.129 } 00:14:20.129 ] 00:14:20.129 }, 00:14:20.129 { 00:14:20.129 "subsystem": "sock", 00:14:20.129 "config": [ 00:14:20.129 { 00:14:20.129 
"method": "sock_set_default_impl", 00:14:20.129 "params": { 00:14:20.129 "impl_name": "uring" 00:14:20.129 } 00:14:20.129 }, 00:14:20.129 { 00:14:20.129 "method": "sock_impl_set_options", 00:14:20.129 "params": { 00:14:20.129 "impl_name": "ssl", 00:14:20.129 "recv_buf_size": 4096, 00:14:20.129 "send_buf_size": 4096, 00:14:20.129 "enable_recv_pipe": true, 00:14:20.129 "enable_quickack": false, 00:14:20.129 "enable_placement_id": 0, 00:14:20.129 "enable_zerocopy_send_server": true, 00:14:20.129 "enable_zerocopy_send_client": false, 00:14:20.129 "zerocopy_threshold": 0, 00:14:20.129 "tls_version": 0, 00:14:20.129 "enable_ktls": false 00:14:20.129 } 00:14:20.129 }, 00:14:20.129 { 00:14:20.129 "method": "sock_impl_set_options", 00:14:20.129 "params": { 00:14:20.129 "impl_name": "posix", 00:14:20.129 "recv_buf_size": 2097152, 00:14:20.129 "send_buf_size": 2097152, 00:14:20.129 "enable_recv_pipe": true, 00:14:20.129 "enable_quickack": false, 00:14:20.129 "enable_placement_id": 0, 00:14:20.129 "enable_zerocopy_send_server": true, 00:14:20.129 "enable_zerocopy_send_client": false, 00:14:20.129 "zerocopy_threshold": 0, 00:14:20.129 "tls_version": 0, 00:14:20.129 "enable_ktls": false 00:14:20.129 } 00:14:20.129 }, 00:14:20.129 { 00:14:20.129 "method": "sock_impl_set_options", 00:14:20.129 "params": { 00:14:20.129 "impl_name": "uring", 00:14:20.129 "recv_buf_size": 2097152, 00:14:20.129 "send_buf_size": 2097152, 00:14:20.129 "enable_recv_pipe": true, 00:14:20.129 "enable_quickack": false, 00:14:20.129 "enable_placement_id": 0, 00:14:20.129 "enable_zerocopy_send_server": false, 00:14:20.129 "enable_zerocopy_send_client": false, 00:14:20.129 "zerocopy_threshold": 0, 00:14:20.129 "tls_version": 0, 00:14:20.129 "enable_ktls": false 00:14:20.129 } 00:14:20.129 } 00:14:20.129 ] 00:14:20.129 }, 00:14:20.129 { 00:14:20.129 "subsystem": "vmd", 00:14:20.129 "config": [] 00:14:20.129 }, 00:14:20.129 { 00:14:20.129 "subsystem": "accel", 00:14:20.129 "config": [ 00:14:20.129 { 00:14:20.129 "method": "accel_set_options", 00:14:20.129 "params": { 00:14:20.129 "small_cache_size": 128, 00:14:20.129 "large_cache_size": 16, 00:14:20.129 "task_count": 2048, 00:14:20.129 "sequence_count": 2048, 00:14:20.129 "buf_count": 2048 00:14:20.129 } 00:14:20.129 } 00:14:20.129 ] 00:14:20.129 }, 00:14:20.129 { 00:14:20.129 "subsystem": "bdev", 00:14:20.129 "config": [ 00:14:20.129 { 00:14:20.129 "method": "bdev_set_options", 00:14:20.129 "params": { 00:14:20.129 "bdev_io_pool_size": 65535, 00:14:20.129 "bdev_io_cache_size": 256, 00:14:20.129 "bdev_auto_examine": true, 00:14:20.129 "iobuf_small_cache_size": 128, 00:14:20.129 "iobuf_large_cache_size": 16 00:14:20.129 } 00:14:20.129 }, 00:14:20.129 { 00:14:20.129 "method": "bdev_raid_set_options", 00:14:20.129 "params": { 00:14:20.129 "process_window_size_kb": 1024 00:14:20.129 } 00:14:20.129 }, 00:14:20.129 { 00:14:20.129 "method": "bdev_iscsi_set_options", 00:14:20.129 "params": { 00:14:20.129 "timeout_sec": 30 00:14:20.129 } 00:14:20.129 }, 00:14:20.129 { 00:14:20.129 "method": "bdev_nvme_set_options", 00:14:20.129 "params": { 00:14:20.129 "action_on_timeout": "none", 00:14:20.129 "timeout_us": 0, 00:14:20.129 "timeout_admin_us": 0, 00:14:20.129 "keep_alive_timeout_ms": 10000, 00:14:20.129 "arbitration_burst": 0, 00:14:20.129 "low_priority_weight": 0, 00:14:20.129 "medium_priority_weight": 0, 00:14:20.129 "high_priority_weight": 0, 00:14:20.129 "nvme_adminq_poll_period_us": 10000, 00:14:20.129 "nvme_ioq_poll_period_us": 0, 00:14:20.129 "io_queue_requests": 0, 00:14:20.129 
"delay_cmd_submit": true, 00:14:20.130 "transport_retry_count": 4, 00:14:20.130 "bdev_retry_count": 3, 00:14:20.130 "transport_ack_timeout": 0, 00:14:20.130 "ctrlr_loss_timeout_sec": 0, 00:14:20.130 "reconnect_delay_sec": 0, 00:14:20.130 "fast_io_fail_timeout_sec": 0, 00:14:20.130 "disable_auto_failback": false, 00:14:20.130 "generate_uuids": false, 00:14:20.130 "transport_tos": 0, 00:14:20.130 "nvme_error_stat": false, 00:14:20.130 "rdma_srq_size": 0, 00:14:20.130 "io_path_stat": false, 00:14:20.130 "allow_accel_sequence": false, 00:14:20.130 "rdma_max_cq_size": 0, 00:14:20.130 "rdma_cm_event_timeout_ms": 0, 00:14:20.130 "dhchap_digests": [ 00:14:20.130 "sha256", 00:14:20.130 "sha384", 00:14:20.130 "sha512" 00:14:20.130 ], 00:14:20.130 "dhchap_dhgroups": [ 00:14:20.130 "null", 00:14:20.130 "ffdhe2048", 00:14:20.130 "ffdhe3072", 00:14:20.130 "ffdhe4096", 00:14:20.130 "ffdhe6144", 00:14:20.130 "ffdhe8192" 00:14:20.130 ] 00:14:20.130 } 00:14:20.130 }, 00:14:20.130 { 00:14:20.130 "method": "bdev_nvme_set_hotplug", 00:14:20.130 "params": { 00:14:20.130 "period_us": 100000, 00:14:20.130 "enable": false 00:14:20.130 } 00:14:20.130 }, 00:14:20.130 { 00:14:20.130 "method": "bdev_malloc_create", 00:14:20.130 "params": { 00:14:20.130 "name": "malloc0", 00:14:20.130 "num_blocks": 8192, 00:14:20.130 "block_size": 4096, 00:14:20.130 "physical_block_size": 4096, 00:14:20.130 "uuid": "c346b53f-770a-402a-8884-5cc11fb57adf", 00:14:20.130 "optimal_io_boundary": 0 00:14:20.130 } 00:14:20.130 }, 00:14:20.130 { 00:14:20.130 "method": "bdev_wait_for_examine" 00:14:20.130 } 00:14:20.130 ] 00:14:20.130 }, 00:14:20.130 { 00:14:20.130 "subsystem": "nbd", 00:14:20.130 "config": [] 00:14:20.130 }, 00:14:20.130 { 00:14:20.130 "subsystem": "scheduler", 00:14:20.130 "config": [ 00:14:20.130 { 00:14:20.130 "method": "framework_set_scheduler", 00:14:20.130 "params": { 00:14:20.130 "name": "static" 00:14:20.130 } 00:14:20.130 } 00:14:20.130 ] 00:14:20.130 }, 00:14:20.130 { 00:14:20.130 "subsystem": "nvmf", 00:14:20.130 "config": [ 00:14:20.130 { 00:14:20.130 "method": "nvmf_set_config", 00:14:20.130 "params": { 00:14:20.130 "discovery_filter": "match_any", 00:14:20.130 "admin_cmd_passthru": { 00:14:20.130 "identify_ctrlr": false 00:14:20.130 } 00:14:20.130 } 00:14:20.130 }, 00:14:20.130 { 00:14:20.130 "method": "nvmf_set_max_subsystems", 00:14:20.130 "params": { 00:14:20.130 "max_subsystems": 1024 00:14:20.130 } 00:14:20.130 }, 00:14:20.130 { 00:14:20.130 "method": "nvmf_set_crdt", 00:14:20.130 "params": { 00:14:20.130 "crdt1": 0, 00:14:20.130 "crdt2": 0, 00:14:20.130 "crdt3": 0 00:14:20.130 } 00:14:20.130 }, 00:14:20.130 { 00:14:20.130 "method": "nvmf_create_transport", 00:14:20.130 "params": { 00:14:20.130 "trtype": "TCP", 00:14:20.130 "max_queue_depth": 128, 00:14:20.130 "max_io_qpairs_per_ctrlr": 127, 00:14:20.130 "in_capsule_data_size": 4096, 00:14:20.130 "max_io_size": 131072, 00:14:20.130 "io_unit_size": 131072, 00:14:20.130 "max_aq_depth": 128, 00:14:20.130 "num_shared_buffers": 511, 00:14:20.130 "buf_cache_size": 4294967295, 00:14:20.130 "dif_insert_or_strip": false, 00:14:20.130 "zcopy": false, 00:14:20.130 "c2h_success": false, 00:14:20.130 "sock_priority": 0, 00:14:20.130 "abort_timeout_sec": 1, 00:14:20.130 "ack_timeout": 0, 00:14:20.130 "data_wr_pool_size": 0 00:14:20.130 } 00:14:20.130 }, 00:14:20.130 { 00:14:20.130 "method": "nvmf_create_subsystem", 00:14:20.130 "params": { 00:14:20.130 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:20.130 "allow_any_host": false, 00:14:20.130 "serial_number": 
"SPDK00000000000001", 00:14:20.130 "model_number": "SPDK bdev Controller", 00:14:20.130 "max_namespaces": 10, 00:14:20.130 "min_cntlid": 1, 00:14:20.130 "max_cntlid": 65519, 00:14:20.130 "ana_reporting": false 00:14:20.130 } 00:14:20.130 }, 00:14:20.130 { 00:14:20.130 "method": "nvmf_subsystem_add_host", 00:14:20.130 "params": { 00:14:20.130 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:20.130 "host": "nqn.2016-06.io.spdk:host1", 00:14:20.130 "psk": "/tmp/tmp.duiv4dqpxY" 00:14:20.130 } 00:14:20.130 }, 00:14:20.130 { 00:14:20.130 "method": "nvmf_subsystem_add_ns", 00:14:20.130 "params": { 00:14:20.130 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:20.130 "namespace": { 00:14:20.130 "nsid": 1, 00:14:20.130 "bdev_name": "malloc0", 00:14:20.130 "nguid": "C346B53F770A402A88845CC11FB57ADF", 00:14:20.130 "uuid": "c346b53f-770a-402a-8884-5cc11fb57adf", 00:14:20.130 "no_auto_visible": false 00:14:20.130 } 00:14:20.130 } 00:14:20.130 }, 00:14:20.130 { 00:14:20.130 "method": "nvmf_subsystem_add_listener", 00:14:20.130 "params": { 00:14:20.130 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:20.130 "listen_address": { 00:14:20.130 "trtype": "TCP", 00:14:20.130 "adrfam": "IPv4", 00:14:20.130 "traddr": "10.0.0.2", 00:14:20.130 "trsvcid": "4420" 00:14:20.130 }, 00:14:20.130 "secure_channel": true 00:14:20.130 } 00:14:20.130 } 00:14:20.130 ] 00:14:20.130 } 00:14:20.130 ] 00:14:20.130 }' 00:14:20.130 12:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:20.388 12:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:14:20.388 "subsystems": [ 00:14:20.388 { 00:14:20.388 "subsystem": "keyring", 00:14:20.388 "config": [] 00:14:20.388 }, 00:14:20.388 { 00:14:20.388 "subsystem": "iobuf", 00:14:20.388 "config": [ 00:14:20.388 { 00:14:20.388 "method": "iobuf_set_options", 00:14:20.388 "params": { 00:14:20.388 "small_pool_count": 8192, 00:14:20.388 "large_pool_count": 1024, 00:14:20.388 "small_bufsize": 8192, 00:14:20.388 "large_bufsize": 135168 00:14:20.388 } 00:14:20.388 } 00:14:20.388 ] 00:14:20.388 }, 00:14:20.388 { 00:14:20.388 "subsystem": "sock", 00:14:20.388 "config": [ 00:14:20.388 { 00:14:20.388 "method": "sock_set_default_impl", 00:14:20.388 "params": { 00:14:20.388 "impl_name": "uring" 00:14:20.388 } 00:14:20.388 }, 00:14:20.388 { 00:14:20.388 "method": "sock_impl_set_options", 00:14:20.388 "params": { 00:14:20.388 "impl_name": "ssl", 00:14:20.388 "recv_buf_size": 4096, 00:14:20.388 "send_buf_size": 4096, 00:14:20.388 "enable_recv_pipe": true, 00:14:20.388 "enable_quickack": false, 00:14:20.388 "enable_placement_id": 0, 00:14:20.388 "enable_zerocopy_send_server": true, 00:14:20.388 "enable_zerocopy_send_client": false, 00:14:20.388 "zerocopy_threshold": 0, 00:14:20.388 "tls_version": 0, 00:14:20.388 "enable_ktls": false 00:14:20.388 } 00:14:20.388 }, 00:14:20.388 { 00:14:20.388 "method": "sock_impl_set_options", 00:14:20.388 "params": { 00:14:20.388 "impl_name": "posix", 00:14:20.388 "recv_buf_size": 2097152, 00:14:20.388 "send_buf_size": 2097152, 00:14:20.388 "enable_recv_pipe": true, 00:14:20.388 "enable_quickack": false, 00:14:20.388 "enable_placement_id": 0, 00:14:20.388 "enable_zerocopy_send_server": true, 00:14:20.388 "enable_zerocopy_send_client": false, 00:14:20.388 "zerocopy_threshold": 0, 00:14:20.388 "tls_version": 0, 00:14:20.388 "enable_ktls": false 00:14:20.388 } 00:14:20.388 }, 00:14:20.388 { 00:14:20.388 "method": "sock_impl_set_options", 00:14:20.388 "params": { 00:14:20.388 "impl_name": "uring", 
00:14:20.388 "recv_buf_size": 2097152, 00:14:20.388 "send_buf_size": 2097152, 00:14:20.388 "enable_recv_pipe": true, 00:14:20.388 "enable_quickack": false, 00:14:20.388 "enable_placement_id": 0, 00:14:20.388 "enable_zerocopy_send_server": false, 00:14:20.388 "enable_zerocopy_send_client": false, 00:14:20.388 "zerocopy_threshold": 0, 00:14:20.388 "tls_version": 0, 00:14:20.388 "enable_ktls": false 00:14:20.388 } 00:14:20.388 } 00:14:20.388 ] 00:14:20.388 }, 00:14:20.388 { 00:14:20.388 "subsystem": "vmd", 00:14:20.388 "config": [] 00:14:20.388 }, 00:14:20.388 { 00:14:20.388 "subsystem": "accel", 00:14:20.388 "config": [ 00:14:20.388 { 00:14:20.388 "method": "accel_set_options", 00:14:20.388 "params": { 00:14:20.388 "small_cache_size": 128, 00:14:20.388 "large_cache_size": 16, 00:14:20.388 "task_count": 2048, 00:14:20.388 "sequence_count": 2048, 00:14:20.388 "buf_count": 2048 00:14:20.388 } 00:14:20.388 } 00:14:20.388 ] 00:14:20.388 }, 00:14:20.388 { 00:14:20.388 "subsystem": "bdev", 00:14:20.388 "config": [ 00:14:20.388 { 00:14:20.388 "method": "bdev_set_options", 00:14:20.388 "params": { 00:14:20.388 "bdev_io_pool_size": 65535, 00:14:20.388 "bdev_io_cache_size": 256, 00:14:20.388 "bdev_auto_examine": true, 00:14:20.388 "iobuf_small_cache_size": 128, 00:14:20.388 "iobuf_large_cache_size": 16 00:14:20.388 } 00:14:20.388 }, 00:14:20.388 { 00:14:20.388 "method": "bdev_raid_set_options", 00:14:20.388 "params": { 00:14:20.388 "process_window_size_kb": 1024 00:14:20.388 } 00:14:20.388 }, 00:14:20.388 { 00:14:20.388 "method": "bdev_iscsi_set_options", 00:14:20.388 "params": { 00:14:20.388 "timeout_sec": 30 00:14:20.388 } 00:14:20.388 }, 00:14:20.388 { 00:14:20.388 "method": "bdev_nvme_set_options", 00:14:20.388 "params": { 00:14:20.388 "action_on_timeout": "none", 00:14:20.388 "timeout_us": 0, 00:14:20.388 "timeout_admin_us": 0, 00:14:20.388 "keep_alive_timeout_ms": 10000, 00:14:20.388 "arbitration_burst": 0, 00:14:20.388 "low_priority_weight": 0, 00:14:20.388 "medium_priority_weight": 0, 00:14:20.388 "high_priority_weight": 0, 00:14:20.388 "nvme_adminq_poll_period_us": 10000, 00:14:20.388 "nvme_ioq_poll_period_us": 0, 00:14:20.388 "io_queue_requests": 512, 00:14:20.388 "delay_cmd_submit": true, 00:14:20.388 "transport_retry_count": 4, 00:14:20.388 "bdev_retry_count": 3, 00:14:20.388 "transport_ack_timeout": 0, 00:14:20.388 "ctrlr_loss_timeout_sec": 0, 00:14:20.388 "reconnect_delay_sec": 0, 00:14:20.388 "fast_io_fail_timeout_sec": 0, 00:14:20.388 "disable_auto_failback": false, 00:14:20.388 "generate_uuids": false, 00:14:20.388 "transport_tos": 0, 00:14:20.388 "nvme_error_stat": false, 00:14:20.388 "rdma_srq_size": 0, 00:14:20.388 "io_path_stat": false, 00:14:20.388 "allow_accel_sequence": false, 00:14:20.388 "rdma_max_cq_size": 0, 00:14:20.388 "rdma_cm_event_timeout_ms": 0, 00:14:20.388 "dhchap_digests": [ 00:14:20.388 "sha256", 00:14:20.388 "sha384", 00:14:20.388 "sha512" 00:14:20.388 ], 00:14:20.388 "dhchap_dhgroups": [ 00:14:20.388 "null", 00:14:20.388 "ffdhe2048", 00:14:20.388 "ffdhe3072", 00:14:20.388 "ffdhe4096", 00:14:20.388 "ffdhe6144", 00:14:20.388 "ffdhe8192" 00:14:20.388 ] 00:14:20.388 } 00:14:20.388 }, 00:14:20.388 { 00:14:20.388 "method": "bdev_nvme_attach_controller", 00:14:20.388 "params": { 00:14:20.388 "name": "TLSTEST", 00:14:20.388 "trtype": "TCP", 00:14:20.388 "adrfam": "IPv4", 00:14:20.388 "traddr": "10.0.0.2", 00:14:20.388 "trsvcid": "4420", 00:14:20.388 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:20.388 "prchk_reftag": false, 00:14:20.388 "prchk_guard": false, 00:14:20.388 
"ctrlr_loss_timeout_sec": 0, 00:14:20.388 "reconnect_delay_sec": 0, 00:14:20.388 "fast_io_fail_timeout_sec": 0, 00:14:20.388 "psk": "/tmp/tmp.duiv4dqpxY", 00:14:20.388 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:20.388 "hdgst": false, 00:14:20.388 "ddgst": false 00:14:20.388 } 00:14:20.388 }, 00:14:20.388 { 00:14:20.388 "method": "bdev_nvme_set_hotplug", 00:14:20.388 "params": { 00:14:20.388 "period_us": 100000, 00:14:20.388 "enable": false 00:14:20.388 } 00:14:20.388 }, 00:14:20.388 { 00:14:20.388 "method": "bdev_wait_for_examine" 00:14:20.388 } 00:14:20.388 ] 00:14:20.388 }, 00:14:20.388 { 00:14:20.388 "subsystem": "nbd", 00:14:20.388 "config": [] 00:14:20.388 } 00:14:20.388 ] 00:14:20.388 }' 00:14:20.388 12:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 73953 00:14:20.388 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73953 ']' 00:14:20.388 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73953 00:14:20.388 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:20.388 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:20.388 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73953 00:14:20.647 killing process with pid 73953 00:14:20.647 Received shutdown signal, test time was about 10.000000 seconds 00:14:20.647 00:14:20.647 Latency(us) 00:14:20.647 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:20.647 =================================================================================================================== 00:14:20.647 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:20.647 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:20.647 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:20.647 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73953' 00:14:20.647 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73953 00:14:20.647 [2024-07-12 12:38:46.478691] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:20.647 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73953 00:14:20.905 12:38:46 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 73893 00:14:20.905 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73893 ']' 00:14:20.905 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73893 00:14:20.905 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:20.905 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:20.905 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73893 00:14:20.905 killing process with pid 73893 00:14:20.905 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:20.905 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:20.905 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73893' 00:14:20.905 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73893 00:14:20.905 [2024-07-12 12:38:46.771380] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled 
for removal in v24.09 hit 1 times 00:14:20.905 12:38:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73893 00:14:21.164 12:38:47 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:14:21.164 12:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:21.164 12:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:21.164 12:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:21.164 12:38:47 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:14:21.164 "subsystems": [ 00:14:21.164 { 00:14:21.164 "subsystem": "keyring", 00:14:21.164 "config": [] 00:14:21.164 }, 00:14:21.164 { 00:14:21.164 "subsystem": "iobuf", 00:14:21.164 "config": [ 00:14:21.164 { 00:14:21.164 "method": "iobuf_set_options", 00:14:21.164 "params": { 00:14:21.164 "small_pool_count": 8192, 00:14:21.164 "large_pool_count": 1024, 00:14:21.164 "small_bufsize": 8192, 00:14:21.164 "large_bufsize": 135168 00:14:21.164 } 00:14:21.164 } 00:14:21.164 ] 00:14:21.164 }, 00:14:21.164 { 00:14:21.164 "subsystem": "sock", 00:14:21.164 "config": [ 00:14:21.164 { 00:14:21.164 "method": "sock_set_default_impl", 00:14:21.164 "params": { 00:14:21.164 "impl_name": "uring" 00:14:21.164 } 00:14:21.164 }, 00:14:21.164 { 00:14:21.164 "method": "sock_impl_set_options", 00:14:21.164 "params": { 00:14:21.164 "impl_name": "ssl", 00:14:21.164 "recv_buf_size": 4096, 00:14:21.164 "send_buf_size": 4096, 00:14:21.164 "enable_recv_pipe": true, 00:14:21.164 "enable_quickack": false, 00:14:21.164 "enable_placement_id": 0, 00:14:21.164 "enable_zerocopy_send_server": true, 00:14:21.164 "enable_zerocopy_send_client": false, 00:14:21.164 "zerocopy_threshold": 0, 00:14:21.164 "tls_version": 0, 00:14:21.164 "enable_ktls": false 00:14:21.164 } 00:14:21.164 }, 00:14:21.164 { 00:14:21.164 "method": "sock_impl_set_options", 00:14:21.164 "params": { 00:14:21.164 "impl_name": "posix", 00:14:21.164 "recv_buf_size": 2097152, 00:14:21.164 "send_buf_size": 2097152, 00:14:21.164 "enable_recv_pipe": true, 00:14:21.164 "enable_quickack": false, 00:14:21.164 "enable_placement_id": 0, 00:14:21.164 "enable_zerocopy_send_server": true, 00:14:21.164 "enable_zerocopy_send_client": false, 00:14:21.164 "zerocopy_threshold": 0, 00:14:21.164 "tls_version": 0, 00:14:21.164 "enable_ktls": false 00:14:21.164 } 00:14:21.164 }, 00:14:21.164 { 00:14:21.164 "method": "sock_impl_set_options", 00:14:21.164 "params": { 00:14:21.164 "impl_name": "uring", 00:14:21.164 "recv_buf_size": 2097152, 00:14:21.164 "send_buf_size": 2097152, 00:14:21.164 "enable_recv_pipe": true, 00:14:21.164 "enable_quickack": false, 00:14:21.164 "enable_placement_id": 0, 00:14:21.164 "enable_zerocopy_send_server": false, 00:14:21.164 "enable_zerocopy_send_client": false, 00:14:21.164 "zerocopy_threshold": 0, 00:14:21.164 "tls_version": 0, 00:14:21.164 "enable_ktls": false 00:14:21.164 } 00:14:21.164 } 00:14:21.164 ] 00:14:21.164 }, 00:14:21.164 { 00:14:21.164 "subsystem": "vmd", 00:14:21.164 "config": [] 00:14:21.164 }, 00:14:21.164 { 00:14:21.164 "subsystem": "accel", 00:14:21.164 "config": [ 00:14:21.164 { 00:14:21.164 "method": "accel_set_options", 00:14:21.164 "params": { 00:14:21.164 "small_cache_size": 128, 00:14:21.164 "large_cache_size": 16, 00:14:21.164 "task_count": 2048, 00:14:21.164 "sequence_count": 2048, 00:14:21.164 "buf_count": 2048 00:14:21.164 } 00:14:21.164 } 00:14:21.164 ] 00:14:21.164 }, 00:14:21.164 { 00:14:21.164 "subsystem": "bdev", 00:14:21.164 "config": [ 00:14:21.164 { 
00:14:21.164 "method": "bdev_set_options", 00:14:21.164 "params": { 00:14:21.164 "bdev_io_pool_size": 65535, 00:14:21.164 "bdev_io_cache_size": 256, 00:14:21.164 "bdev_auto_examine": true, 00:14:21.164 "iobuf_small_cache_size": 128, 00:14:21.164 "iobuf_large_cache_size": 16 00:14:21.164 } 00:14:21.164 }, 00:14:21.164 { 00:14:21.164 "method": "bdev_raid_set_options", 00:14:21.164 "params": { 00:14:21.164 "process_window_size_kb": 1024 00:14:21.164 } 00:14:21.164 }, 00:14:21.164 { 00:14:21.164 "method": "bdev_iscsi_set_options", 00:14:21.164 "params": { 00:14:21.164 "timeout_sec": 30 00:14:21.164 } 00:14:21.164 }, 00:14:21.164 { 00:14:21.164 "method": "bdev_nvme_set_options", 00:14:21.165 "params": { 00:14:21.165 "action_on_timeout": "none", 00:14:21.165 "timeout_us": 0, 00:14:21.165 "timeout_admin_us": 0, 00:14:21.165 "keep_alive_timeout_ms": 10000, 00:14:21.165 "arbitration_burst": 0, 00:14:21.165 "low_priority_weight": 0, 00:14:21.165 "medium_priority_weight": 0, 00:14:21.165 "high_priority_weight": 0, 00:14:21.165 "nvme_adminq_poll_period_us": 10000, 00:14:21.165 "nvme_ioq_poll_period_us": 0, 00:14:21.165 "io_queue_requests": 0, 00:14:21.165 "delay_cmd_submit": true, 00:14:21.165 "transport_retry_count": 4, 00:14:21.165 "bdev_retry_count": 3, 00:14:21.165 "transport_ack_timeout": 0, 00:14:21.165 "ctrlr_loss_timeout_sec": 0, 00:14:21.165 "reconnect_delay_sec": 0, 00:14:21.165 "fast_io_fail_timeout_sec": 0, 00:14:21.165 "disable_auto_failback": false, 00:14:21.165 "generate_uuids": false, 00:14:21.165 "transport_tos": 0, 00:14:21.165 "nvme_error_stat": false, 00:14:21.165 "rdma_srq_size": 0, 00:14:21.165 "io_path_stat": false, 00:14:21.165 "allow_accel_sequence": false, 00:14:21.165 "rdma_max_cq_size": 0, 00:14:21.165 "rdma_cm_event_timeout_ms": 0, 00:14:21.165 "dhchap_digests": [ 00:14:21.165 "sha256", 00:14:21.165 "sha384", 00:14:21.165 "sha512" 00:14:21.165 ], 00:14:21.165 "dhchap_dhgroups": [ 00:14:21.165 "null", 00:14:21.165 "ffdhe2048", 00:14:21.165 "ffdhe3072", 00:14:21.165 "ffdhe4096", 00:14:21.165 "ffdhe6144", 00:14:21.165 "ffdhe8192" 00:14:21.165 ] 00:14:21.165 } 00:14:21.165 }, 00:14:21.165 { 00:14:21.165 "method": "bdev_nvme_set_hotplug", 00:14:21.165 "params": { 00:14:21.165 "period_us": 100000, 00:14:21.165 "enable": false 00:14:21.165 } 00:14:21.165 }, 00:14:21.165 { 00:14:21.165 "method": "bdev_malloc_create", 00:14:21.165 "params": { 00:14:21.165 "name": "malloc0", 00:14:21.165 "num_blocks": 8192, 00:14:21.165 "block_size": 4096, 00:14:21.165 "physical_block_size": 4096, 00:14:21.165 "uuid": "c346b53f-770a-402a-8884-5cc11fb57adf", 00:14:21.165 "optimal_io_boundary": 0 00:14:21.165 } 00:14:21.165 }, 00:14:21.165 { 00:14:21.165 "method": "bdev_wait_for_examine" 00:14:21.165 } 00:14:21.165 ] 00:14:21.165 }, 00:14:21.165 { 00:14:21.165 "subsystem": "nbd", 00:14:21.165 "config": [] 00:14:21.165 }, 00:14:21.165 { 00:14:21.165 "subsystem": "scheduler", 00:14:21.165 "config": [ 00:14:21.165 { 00:14:21.165 "method": "framework_set_scheduler", 00:14:21.165 "params": { 00:14:21.165 "name": "static" 00:14:21.165 } 00:14:21.165 } 00:14:21.165 ] 00:14:21.165 }, 00:14:21.165 { 00:14:21.165 "subsystem": "nvmf", 00:14:21.165 "config": [ 00:14:21.165 { 00:14:21.165 "method": "nvmf_set_config", 00:14:21.165 "params": { 00:14:21.165 "discovery_filter": "match_any", 00:14:21.165 "admin_cmd_passthru": { 00:14:21.165 "identify_ctrlr": false 00:14:21.165 } 00:14:21.165 } 00:14:21.165 }, 00:14:21.165 { 00:14:21.165 "method": "nvmf_set_max_subsystems", 00:14:21.165 "params": { 00:14:21.165 
"max_subsystems": 1024 00:14:21.165 } 00:14:21.165 }, 00:14:21.165 { 00:14:21.165 "method": "nvmf_set_crdt", 00:14:21.165 "params": { 00:14:21.165 "crdt1": 0, 00:14:21.165 "crdt2": 0, 00:14:21.165 "crdt3": 0 00:14:21.165 } 00:14:21.165 }, 00:14:21.165 { 00:14:21.165 "method": "nvmf_create_transport", 00:14:21.165 "params": { 00:14:21.165 "trtype": "TCP", 00:14:21.165 "max_queue_depth": 128, 00:14:21.165 "max_io_qpairs_per_ctrlr": 127, 00:14:21.165 "in_capsule_data_size": 4096, 00:14:21.165 "max_io_size": 131072, 00:14:21.165 "io_unit_size": 131072, 00:14:21.165 "max_aq_depth": 128, 00:14:21.165 "num_shared_buffers": 511, 00:14:21.165 "buf_cache_size": 4294967295, 00:14:21.165 "dif_insert_or_strip": false, 00:14:21.165 "zcopy": false, 00:14:21.165 "c2h_success": false, 00:14:21.165 "sock_priority": 0, 00:14:21.165 "abort_timeout_sec": 1, 00:14:21.165 "ack_timeout": 0, 00:14:21.165 "data_wr_pool_size": 0 00:14:21.165 } 00:14:21.165 }, 00:14:21.165 { 00:14:21.165 "method": "nvmf_create_subsystem", 00:14:21.165 "params": { 00:14:21.165 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:21.165 "allow_any_host": false, 00:14:21.165 "serial_number": "SPDK00000000000001", 00:14:21.165 "model_number": "SPDK bdev Controller", 00:14:21.165 "max_namespaces": 10, 00:14:21.165 "min_cntlid": 1, 00:14:21.165 "max_cntlid": 65519, 00:14:21.165 "ana_reporting": false 00:14:21.165 } 00:14:21.165 }, 00:14:21.165 { 00:14:21.165 "method": "nvmf_subsystem_add_host", 00:14:21.165 "params": { 00:14:21.165 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:21.165 "host": "nqn.2016-06.io.spdk:host1", 00:14:21.165 "psk": "/tmp/tmp.duiv4dqpxY" 00:14:21.165 } 00:14:21.165 }, 00:14:21.165 { 00:14:21.165 "method": "nvmf_subsystem_add_ns", 00:14:21.165 "params": { 00:14:21.165 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:21.165 "namespace": { 00:14:21.165 "nsid": 1, 00:14:21.165 "bdev_name": "malloc0", 00:14:21.165 "nguid": "C346B53F770A402A88845CC11FB57ADF", 00:14:21.165 "uuid": "c346b53f-770a-402a-8884-5cc11fb57adf", 00:14:21.165 "no_auto_visible": false 00:14:21.165 } 00:14:21.165 } 00:14:21.165 }, 00:14:21.165 { 00:14:21.165 "method": "nvmf_subsystem_add_listener", 00:14:21.165 "params": { 00:14:21.165 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:21.165 "listen_address": { 00:14:21.165 "trtype": "TCP", 00:14:21.165 "adrfam": "IPv4", 00:14:21.165 "traddr": "10.0.0.2", 00:14:21.165 "trsvcid": "4420" 00:14:21.165 }, 00:14:21.165 "secure_channel": true 00:14:21.165 } 00:14:21.165 } 00:14:21.165 ] 00:14:21.165 } 00:14:21.165 ] 00:14:21.165 }' 00:14:21.165 12:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=73997 00:14:21.165 12:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 73997 00:14:21.165 12:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:14:21.165 12:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 73997 ']' 00:14:21.165 12:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.165 12:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:21.165 12:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:21.165 12:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:21.165 12:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:21.165 [2024-07-12 12:38:47.079648] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:14:21.165 [2024-07-12 12:38:47.079758] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.165 [2024-07-12 12:38:47.220271] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.424 [2024-07-12 12:38:47.329887] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:21.424 [2024-07-12 12:38:47.329953] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:21.424 [2024-07-12 12:38:47.329981] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:21.424 [2024-07-12 12:38:47.329989] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:21.424 [2024-07-12 12:38:47.329996] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:21.424 [2024-07-12 12:38:47.330093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:21.684 [2024-07-12 12:38:47.502031] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:21.685 [2024-07-12 12:38:47.571080] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:21.685 [2024-07-12 12:38:47.587019] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:21.685 [2024-07-12 12:38:47.603014] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:21.685 [2024-07-12 12:38:47.603235] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:22.256 12:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:22.256 12:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:22.256 12:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:22.256 12:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:22.256 12:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:22.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:22.256 12:38:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:22.256 12:38:48 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=74029 00:14:22.256 12:38:48 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 74029 /var/tmp/bdevperf.sock 00:14:22.256 12:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74029 ']' 00:14:22.256 12:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:22.256 12:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:22.256 12:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
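
With the target reporting 'NVMe/TCP Target Listening on 10.0.0.2 port 4420', its subsystem state can be checked over the target RPC socket before the initiator is pointed at it; a small sketch, assuming the default /var/tmp/spdk.sock and using jq purely for readability (both are assumptions, not taken from this run):

# Confirm the TLS-enabled listener and the host admitted via the PSK.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
ip netns exec nvmf_tgt_ns_spdk $RPC -s /var/tmp/spdk.sock nvmf_get_subsystems \
    | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | {listen_addresses, hosts}'
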
00:14:22.256 12:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:22.256 12:38:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:22.256 12:38:48 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:14:22.256 12:38:48 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:14:22.256 "subsystems": [ 00:14:22.256 { 00:14:22.256 "subsystem": "keyring", 00:14:22.256 "config": [] 00:14:22.256 }, 00:14:22.256 { 00:14:22.256 "subsystem": "iobuf", 00:14:22.256 "config": [ 00:14:22.256 { 00:14:22.256 "method": "iobuf_set_options", 00:14:22.256 "params": { 00:14:22.256 "small_pool_count": 8192, 00:14:22.256 "large_pool_count": 1024, 00:14:22.256 "small_bufsize": 8192, 00:14:22.256 "large_bufsize": 135168 00:14:22.256 } 00:14:22.256 } 00:14:22.256 ] 00:14:22.256 }, 00:14:22.256 { 00:14:22.256 "subsystem": "sock", 00:14:22.256 "config": [ 00:14:22.256 { 00:14:22.256 "method": "sock_set_default_impl", 00:14:22.256 "params": { 00:14:22.256 "impl_name": "uring" 00:14:22.256 } 00:14:22.256 }, 00:14:22.256 { 00:14:22.256 "method": "sock_impl_set_options", 00:14:22.256 "params": { 00:14:22.256 "impl_name": "ssl", 00:14:22.256 "recv_buf_size": 4096, 00:14:22.256 "send_buf_size": 4096, 00:14:22.256 "enable_recv_pipe": true, 00:14:22.256 "enable_quickack": false, 00:14:22.256 "enable_placement_id": 0, 00:14:22.256 "enable_zerocopy_send_server": true, 00:14:22.256 "enable_zerocopy_send_client": false, 00:14:22.256 "zerocopy_threshold": 0, 00:14:22.256 "tls_version": 0, 00:14:22.256 "enable_ktls": false 00:14:22.256 } 00:14:22.256 }, 00:14:22.256 { 00:14:22.256 "method": "sock_impl_set_options", 00:14:22.256 "params": { 00:14:22.256 "impl_name": "posix", 00:14:22.256 "recv_buf_size": 2097152, 00:14:22.256 "send_buf_size": 2097152, 00:14:22.256 "enable_recv_pipe": true, 00:14:22.256 "enable_quickack": false, 00:14:22.256 "enable_placement_id": 0, 00:14:22.256 "enable_zerocopy_send_server": true, 00:14:22.256 "enable_zerocopy_send_client": false, 00:14:22.256 "zerocopy_threshold": 0, 00:14:22.256 "tls_version": 0, 00:14:22.256 "enable_ktls": false 00:14:22.256 } 00:14:22.256 }, 00:14:22.256 { 00:14:22.256 "method": "sock_impl_set_options", 00:14:22.256 "params": { 00:14:22.256 "impl_name": "uring", 00:14:22.256 "recv_buf_size": 2097152, 00:14:22.256 "send_buf_size": 2097152, 00:14:22.256 "enable_recv_pipe": true, 00:14:22.256 "enable_quickack": false, 00:14:22.256 "enable_placement_id": 0, 00:14:22.256 "enable_zerocopy_send_server": false, 00:14:22.256 "enable_zerocopy_send_client": false, 00:14:22.256 "zerocopy_threshold": 0, 00:14:22.256 "tls_version": 0, 00:14:22.256 "enable_ktls": false 00:14:22.256 } 00:14:22.256 } 00:14:22.256 ] 00:14:22.256 }, 00:14:22.256 { 00:14:22.256 "subsystem": "vmd", 00:14:22.256 "config": [] 00:14:22.256 }, 00:14:22.256 { 00:14:22.256 "subsystem": "accel", 00:14:22.256 "config": [ 00:14:22.256 { 00:14:22.256 "method": "accel_set_options", 00:14:22.256 "params": { 00:14:22.256 "small_cache_size": 128, 00:14:22.256 "large_cache_size": 16, 00:14:22.256 "task_count": 2048, 00:14:22.256 "sequence_count": 2048, 00:14:22.256 "buf_count": 2048 00:14:22.256 } 00:14:22.256 } 00:14:22.256 ] 00:14:22.256 }, 00:14:22.256 { 00:14:22.256 "subsystem": "bdev", 00:14:22.256 "config": [ 00:14:22.256 { 00:14:22.256 "method": "bdev_set_options", 00:14:22.256 "params": { 00:14:22.256 "bdev_io_pool_size": 65535, 00:14:22.256 
"bdev_io_cache_size": 256, 00:14:22.256 "bdev_auto_examine": true, 00:14:22.256 "iobuf_small_cache_size": 128, 00:14:22.256 "iobuf_large_cache_size": 16 00:14:22.256 } 00:14:22.256 }, 00:14:22.256 { 00:14:22.256 "method": "bdev_raid_set_options", 00:14:22.256 "params": { 00:14:22.257 "process_window_size_kb": 1024 00:14:22.257 } 00:14:22.257 }, 00:14:22.257 { 00:14:22.257 "method": "bdev_iscsi_set_options", 00:14:22.257 "params": { 00:14:22.257 "timeout_sec": 30 00:14:22.257 } 00:14:22.257 }, 00:14:22.257 { 00:14:22.257 "method": "bdev_nvme_set_options", 00:14:22.257 "params": { 00:14:22.257 "action_on_timeout": "none", 00:14:22.257 "timeout_us": 0, 00:14:22.257 "timeout_admin_us": 0, 00:14:22.257 "keep_alive_timeout_ms": 10000, 00:14:22.257 "arbitration_burst": 0, 00:14:22.257 "low_priority_weight": 0, 00:14:22.257 "medium_priority_weight": 0, 00:14:22.257 "high_priority_weight": 0, 00:14:22.257 "nvme_adminq_poll_period_us": 10000, 00:14:22.257 "nvme_ioq_poll_period_us": 0, 00:14:22.257 "io_queue_requests": 512, 00:14:22.257 "delay_cmd_submit": true, 00:14:22.257 "transport_retry_count": 4, 00:14:22.257 "bdev_retry_count": 3, 00:14:22.257 "transport_ack_timeout": 0, 00:14:22.257 "ctrlr_loss_timeout_sec": 0, 00:14:22.257 "reconnect_delay_sec": 0, 00:14:22.257 "fast_io_fail_timeout_sec": 0, 00:14:22.257 "disable_auto_failback": false, 00:14:22.257 "generate_uuids": false, 00:14:22.257 "transport_tos": 0, 00:14:22.257 "nvme_error_stat": false, 00:14:22.257 "rdma_srq_size": 0, 00:14:22.257 "io_path_stat": false, 00:14:22.257 "allow_accel_sequence": false, 00:14:22.257 "rdma_max_cq_size": 0, 00:14:22.257 "rdma_cm_event_timeout_ms": 0, 00:14:22.257 "dhchap_digests": [ 00:14:22.257 "sha256", 00:14:22.257 "sha384", 00:14:22.257 "sha512" 00:14:22.257 ], 00:14:22.257 "dhchap_dhgroups": [ 00:14:22.257 "null", 00:14:22.257 "ffdhe2048", 00:14:22.257 "ffdhe3072", 00:14:22.257 "ffdhe4096", 00:14:22.257 "ffdhe6144", 00:14:22.257 "ffdhe8192" 00:14:22.257 ] 00:14:22.257 } 00:14:22.257 }, 00:14:22.257 { 00:14:22.257 "method": "bdev_nvme_attach_controller", 00:14:22.257 "params": { 00:14:22.257 "name": "TLSTEST", 00:14:22.257 "trtype": "TCP", 00:14:22.257 "adrfam": "IPv4", 00:14:22.257 "traddr": "10.0.0.2", 00:14:22.257 "trsvcid": "4420", 00:14:22.257 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:22.257 "prchk_reftag": false, 00:14:22.257 "prchk_guard": false, 00:14:22.257 "ctrlr_loss_timeout_sec": 0, 00:14:22.257 "reconnect_delay_sec": 0, 00:14:22.257 "fast_io_fail_timeout_sec": 0, 00:14:22.257 "psk": "/tmp/tmp.duiv4dqpxY", 00:14:22.257 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:22.257 "hdgst": false, 00:14:22.257 "ddgst": false 00:14:22.257 } 00:14:22.257 }, 00:14:22.257 { 00:14:22.257 "method": "bdev_nvme_set_hotplug", 00:14:22.257 "params": { 00:14:22.257 "period_us": 100000, 00:14:22.257 "enable": false 00:14:22.257 } 00:14:22.257 }, 00:14:22.257 { 00:14:22.257 "method": "bdev_wait_for_examine" 00:14:22.257 } 00:14:22.257 ] 00:14:22.257 }, 00:14:22.257 { 00:14:22.257 "subsystem": "nbd", 00:14:22.257 "config": [] 00:14:22.257 } 00:14:22.257 ] 00:14:22.257 }' 00:14:22.257 [2024-07-12 12:38:48.151309] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:14:22.257 [2024-07-12 12:38:48.151492] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74029 ] 00:14:22.257 [2024-07-12 12:38:48.294942] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.515 [2024-07-12 12:38:48.458185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:22.772 [2024-07-12 12:38:48.598782] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:22.772 [2024-07-12 12:38:48.651993] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:22.772 [2024-07-12 12:38:48.652186] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:23.338 12:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:23.338 12:38:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:23.338 12:38:49 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:23.338 Running I/O for 10 seconds... 00:14:33.339 00:14:33.339 Latency(us) 00:14:33.339 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.339 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:33.339 Verification LBA range: start 0x0 length 0x2000 00:14:33.339 TLSTESTn1 : 10.02 4019.12 15.70 0.00 0.00 31783.68 7983.48 33602.09 00:14:33.339 =================================================================================================================== 00:14:33.339 Total : 4019.12 15.70 0.00 0.00 31783.68 7983.48 33602.09 00:14:33.339 0 00:14:33.339 12:38:59 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:33.339 12:38:59 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 74029 00:14:33.339 12:38:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74029 ']' 00:14:33.339 12:38:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74029 00:14:33.339 12:38:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:33.339 12:38:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:33.339 12:38:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74029 00:14:33.339 killing process with pid 74029 00:14:33.339 Received shutdown signal, test time was about 10.000000 seconds 00:14:33.339 00:14:33.339 Latency(us) 00:14:33.339 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.339 =================================================================================================================== 00:14:33.339 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:33.339 12:38:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:14:33.339 12:38:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:14:33.339 12:38:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74029' 00:14:33.339 12:38:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74029 00:14:33.339 [2024-07-12 12:38:59.341294] app.c:1023:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:33.339 12:38:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74029 00:14:33.597 12:38:59 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 73997 00:14:33.598 12:38:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 73997 ']' 00:14:33.598 12:38:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 73997 00:14:33.598 12:38:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:33.598 12:38:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:33.598 12:38:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 73997 00:14:33.598 killing process with pid 73997 00:14:33.598 12:38:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:33.598 12:38:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:33.598 12:38:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 73997' 00:14:33.598 12:38:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 73997 00:14:33.598 [2024-07-12 12:38:59.637849] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:33.598 12:38:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 73997 00:14:33.856 12:38:59 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:14:33.856 12:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:33.856 12:38:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:33.856 12:38:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:33.856 12:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=74168 00:14:33.856 12:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:33.856 12:38:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 74168 00:14:33.856 12:38:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74168 ']' 00:14:33.856 12:38:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.856 12:38:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:33.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.856 12:38:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.856 12:38:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:33.856 12:38:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:34.114 [2024-07-12 12:38:59.938623] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:14:34.114 [2024-07-12 12:38:59.938717] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:34.114 [2024-07-12 12:39:00.074590] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.114 [2024-07-12 12:39:00.187254] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
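
Around this point the test tears down both the bdevperf process and the previous target before starting a fresh one; the killprocess helper traced above boils down to checking that the pid is still alive, killing it, and reaping it. A simplified sketch of that pattern (not the helper's literal body):

# Simplified killprocess: confirm the pid is live, terminate it, then reap it.
killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1            # nothing to do if it already exited
    ps --no-headers -o comm= "$pid"       # log which process is being killed
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                   # reap the child; ignore its exit code
}
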
00:14:34.372 [2024-07-12 12:39:00.187658] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:34.372 [2024-07-12 12:39:00.187803] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:34.372 [2024-07-12 12:39:00.187819] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:34.372 [2024-07-12 12:39:00.187830] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:34.372 [2024-07-12 12:39:00.187862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.372 [2024-07-12 12:39:00.245048] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:34.936 12:39:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:34.936 12:39:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:34.936 12:39:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:34.936 12:39:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:34.936 12:39:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:34.936 12:39:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:34.936 12:39:00 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.duiv4dqpxY 00:14:34.937 12:39:00 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.duiv4dqpxY 00:14:34.937 12:39:00 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:35.195 [2024-07-12 12:39:01.149550] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:35.195 12:39:01 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:35.454 12:39:01 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:35.712 [2024-07-12 12:39:01.701649] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:35.712 [2024-07-12 12:39:01.701996] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:35.712 12:39:01 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:35.970 malloc0 00:14:35.970 12:39:01 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:36.227 12:39:02 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.duiv4dqpxY 00:14:36.485 [2024-07-12 12:39:02.474231] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:36.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
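
The target-side TLS setup just traced (target/tls.sh@49-58) reduces to six RPCs: create the TCP transport, create the subsystem, add a TLS-capable listener (-k), back it with a malloc bdev namespace, and admit the host with its PSK file. Condensed into one sequence:

# Target-side setup as traced above (setup_nvmf_tgt), condensed.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.duiv4dqpxY
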
00:14:36.485 12:39:02 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=74217 00:14:36.485 12:39:02 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:36.485 12:39:02 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:36.485 12:39:02 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 74217 /var/tmp/bdevperf.sock 00:14:36.485 12:39:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74217 ']' 00:14:36.485 12:39:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:36.485 12:39:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:36.485 12:39:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:36.485 12:39:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:36.485 12:39:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:36.485 [2024-07-12 12:39:02.540510] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:14:36.485 [2024-07-12 12:39:02.540782] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74217 ] 00:14:36.742 [2024-07-12 12:39:02.676181] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.742 [2024-07-12 12:39:02.815866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:37.000 [2024-07-12 12:39:02.872977] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:37.564 12:39:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:37.564 12:39:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:37.564 12:39:03 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.duiv4dqpxY 00:14:37.822 12:39:03 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:38.083 [2024-07-12 12:39:03.995499] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:38.083 nvme0n1 00:14:38.083 12:39:04 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:38.340 Running I/O for 1 seconds... 
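
On the initiator side the flow is: register the PSK file as a named key in the bdevperf keyring, attach an NVMe-oF TCP controller that references that key, then drive I/O through bdevperf.py. Condensed from the trace above (target/tls.sh@227, @228 and @232):

# Initiator-side TLS setup and test run, condensed from the trace above.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.duiv4dqpxY
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests
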
00:14:39.273 00:14:39.273 Latency(us) 00:14:39.273 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.273 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:39.273 Verification LBA range: start 0x0 length 0x2000 00:14:39.273 nvme0n1 : 1.03 3824.70 14.94 0.00 0.00 32998.94 9592.09 21924.77 00:14:39.273 =================================================================================================================== 00:14:39.273 Total : 3824.70 14.94 0.00 0.00 32998.94 9592.09 21924.77 00:14:39.273 0 00:14:39.273 12:39:05 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 74217 00:14:39.273 12:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74217 ']' 00:14:39.273 12:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74217 00:14:39.273 12:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:39.273 12:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:39.273 12:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74217 00:14:39.273 killing process with pid 74217 00:14:39.273 Received shutdown signal, test time was about 1.000000 seconds 00:14:39.273 00:14:39.273 Latency(us) 00:14:39.273 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.273 =================================================================================================================== 00:14:39.273 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:39.273 12:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:39.273 12:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:39.273 12:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74217' 00:14:39.273 12:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74217 00:14:39.273 12:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74217 00:14:39.531 12:39:05 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 74168 00:14:39.531 12:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74168 ']' 00:14:39.531 12:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74168 00:14:39.531 12:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:39.531 12:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:39.531 12:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74168 00:14:39.531 killing process with pid 74168 00:14:39.531 12:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:39.531 12:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:39.531 12:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74168' 00:14:39.531 12:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74168 00:14:39.531 [2024-07-12 12:39:05.592581] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:39.531 12:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74168 00:14:39.789 12:39:05 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:14:39.789 12:39:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:39.789 12:39:05 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:14:39.789 12:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:39.789 12:39:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=74269 00:14:39.789 12:39:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:39.789 12:39:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 74269 00:14:39.789 12:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74269 ']' 00:14:39.789 12:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.789 12:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:39.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:39.789 12:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.789 12:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:39.789 12:39:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:40.047 [2024-07-12 12:39:05.896321] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:14:40.047 [2024-07-12 12:39:05.896429] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:40.047 [2024-07-12 12:39:06.031359] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.305 [2024-07-12 12:39:06.154698] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:40.305 [2024-07-12 12:39:06.154779] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:40.305 [2024-07-12 12:39:06.154790] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:40.305 [2024-07-12 12:39:06.154799] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:40.305 [2024-07-12 12:39:06.154806] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
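
Each nvmfappstart/bdevperf start above is followed by a 'Waiting for process to start up and listen on UNIX domain socket ...' phase with max_retries=100; the real waitforlisten helper is not shown in this log, but a minimal equivalent would poll the RPC socket until the new process answers. A sketch under that assumption (not the helper's actual body):

# Poll the RPC socket until the freshly started SPDK app responds, or give up.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" || return 1        # the app died before it could listen
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
               rpc_get_methods &> /dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}
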
00:14:40.305 [2024-07-12 12:39:06.154836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.305 [2024-07-12 12:39:06.210864] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:40.872 12:39:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:40.872 12:39:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:40.872 12:39:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:40.872 12:39:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:40.872 12:39:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:41.130 12:39:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:41.130 12:39:06 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:14:41.130 12:39:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.130 12:39:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:41.130 [2024-07-12 12:39:06.972229] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:41.130 malloc0 00:14:41.130 [2024-07-12 12:39:07.004539] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:41.130 [2024-07-12 12:39:07.004764] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:41.130 12:39:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.130 12:39:07 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=74306 00:14:41.130 12:39:07 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:14:41.130 12:39:07 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 74306 /var/tmp/bdevperf.sock 00:14:41.130 12:39:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74306 ']' 00:14:41.130 12:39:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:41.130 12:39:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:41.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:41.130 12:39:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:41.130 12:39:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:41.130 12:39:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:41.130 [2024-07-12 12:39:07.082101] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:14:41.130 [2024-07-12 12:39:07.082180] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74306 ] 00:14:41.403 [2024-07-12 12:39:07.220098] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.403 [2024-07-12 12:39:07.363563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:41.403 [2024-07-12 12:39:07.419305] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:41.968 12:39:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:41.968 12:39:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:41.968 12:39:07 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.duiv4dqpxY 00:14:42.226 12:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:42.484 [2024-07-12 12:39:08.456234] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:42.484 nvme0n1 00:14:42.484 12:39:08 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:42.742 Running I/O for 1 seconds... 00:14:43.676 00:14:43.676 Latency(us) 00:14:43.676 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.676 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:43.676 Verification LBA range: start 0x0 length 0x2000 00:14:43.676 nvme0n1 : 1.03 3956.36 15.45 0.00 0.00 31919.63 7357.91 19422.49 00:14:43.676 =================================================================================================================== 00:14:43.676 Total : 3956.36 15.45 0.00 0.00 31919.63 7357.91 19422.49 00:14:43.676 0 00:14:43.676 12:39:09 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:14:43.676 12:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.676 12:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:43.934 12:39:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.934 12:39:09 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:14:43.934 "subsystems": [ 00:14:43.934 { 00:14:43.934 "subsystem": "keyring", 00:14:43.934 "config": [ 00:14:43.934 { 00:14:43.934 "method": "keyring_file_add_key", 00:14:43.934 "params": { 00:14:43.934 "name": "key0", 00:14:43.934 "path": "/tmp/tmp.duiv4dqpxY" 00:14:43.934 } 00:14:43.934 } 00:14:43.934 ] 00:14:43.934 }, 00:14:43.934 { 00:14:43.934 "subsystem": "iobuf", 00:14:43.934 "config": [ 00:14:43.934 { 00:14:43.934 "method": "iobuf_set_options", 00:14:43.934 "params": { 00:14:43.934 "small_pool_count": 8192, 00:14:43.934 "large_pool_count": 1024, 00:14:43.934 "small_bufsize": 8192, 00:14:43.934 "large_bufsize": 135168 00:14:43.934 } 00:14:43.934 } 00:14:43.934 ] 00:14:43.934 }, 00:14:43.934 { 00:14:43.934 "subsystem": "sock", 00:14:43.934 "config": [ 00:14:43.934 { 00:14:43.934 "method": "sock_set_default_impl", 00:14:43.934 "params": { 00:14:43.934 "impl_name": "uring" 
00:14:43.934 } 00:14:43.934 }, 00:14:43.934 { 00:14:43.934 "method": "sock_impl_set_options", 00:14:43.934 "params": { 00:14:43.934 "impl_name": "ssl", 00:14:43.934 "recv_buf_size": 4096, 00:14:43.934 "send_buf_size": 4096, 00:14:43.934 "enable_recv_pipe": true, 00:14:43.934 "enable_quickack": false, 00:14:43.934 "enable_placement_id": 0, 00:14:43.934 "enable_zerocopy_send_server": true, 00:14:43.934 "enable_zerocopy_send_client": false, 00:14:43.934 "zerocopy_threshold": 0, 00:14:43.934 "tls_version": 0, 00:14:43.934 "enable_ktls": false 00:14:43.934 } 00:14:43.934 }, 00:14:43.934 { 00:14:43.934 "method": "sock_impl_set_options", 00:14:43.934 "params": { 00:14:43.934 "impl_name": "posix", 00:14:43.934 "recv_buf_size": 2097152, 00:14:43.934 "send_buf_size": 2097152, 00:14:43.934 "enable_recv_pipe": true, 00:14:43.934 "enable_quickack": false, 00:14:43.934 "enable_placement_id": 0, 00:14:43.934 "enable_zerocopy_send_server": true, 00:14:43.934 "enable_zerocopy_send_client": false, 00:14:43.934 "zerocopy_threshold": 0, 00:14:43.934 "tls_version": 0, 00:14:43.934 "enable_ktls": false 00:14:43.934 } 00:14:43.934 }, 00:14:43.934 { 00:14:43.934 "method": "sock_impl_set_options", 00:14:43.934 "params": { 00:14:43.934 "impl_name": "uring", 00:14:43.934 "recv_buf_size": 2097152, 00:14:43.934 "send_buf_size": 2097152, 00:14:43.934 "enable_recv_pipe": true, 00:14:43.934 "enable_quickack": false, 00:14:43.934 "enable_placement_id": 0, 00:14:43.934 "enable_zerocopy_send_server": false, 00:14:43.934 "enable_zerocopy_send_client": false, 00:14:43.934 "zerocopy_threshold": 0, 00:14:43.934 "tls_version": 0, 00:14:43.934 "enable_ktls": false 00:14:43.934 } 00:14:43.934 } 00:14:43.934 ] 00:14:43.934 }, 00:14:43.934 { 00:14:43.934 "subsystem": "vmd", 00:14:43.934 "config": [] 00:14:43.934 }, 00:14:43.934 { 00:14:43.934 "subsystem": "accel", 00:14:43.934 "config": [ 00:14:43.934 { 00:14:43.934 "method": "accel_set_options", 00:14:43.934 "params": { 00:14:43.934 "small_cache_size": 128, 00:14:43.934 "large_cache_size": 16, 00:14:43.934 "task_count": 2048, 00:14:43.934 "sequence_count": 2048, 00:14:43.934 "buf_count": 2048 00:14:43.934 } 00:14:43.934 } 00:14:43.934 ] 00:14:43.934 }, 00:14:43.934 { 00:14:43.934 "subsystem": "bdev", 00:14:43.934 "config": [ 00:14:43.934 { 00:14:43.934 "method": "bdev_set_options", 00:14:43.934 "params": { 00:14:43.934 "bdev_io_pool_size": 65535, 00:14:43.934 "bdev_io_cache_size": 256, 00:14:43.934 "bdev_auto_examine": true, 00:14:43.934 "iobuf_small_cache_size": 128, 00:14:43.934 "iobuf_large_cache_size": 16 00:14:43.934 } 00:14:43.934 }, 00:14:43.934 { 00:14:43.934 "method": "bdev_raid_set_options", 00:14:43.934 "params": { 00:14:43.934 "process_window_size_kb": 1024 00:14:43.934 } 00:14:43.934 }, 00:14:43.934 { 00:14:43.934 "method": "bdev_iscsi_set_options", 00:14:43.934 "params": { 00:14:43.934 "timeout_sec": 30 00:14:43.934 } 00:14:43.934 }, 00:14:43.934 { 00:14:43.934 "method": "bdev_nvme_set_options", 00:14:43.934 "params": { 00:14:43.934 "action_on_timeout": "none", 00:14:43.934 "timeout_us": 0, 00:14:43.934 "timeout_admin_us": 0, 00:14:43.934 "keep_alive_timeout_ms": 10000, 00:14:43.934 "arbitration_burst": 0, 00:14:43.934 "low_priority_weight": 0, 00:14:43.934 "medium_priority_weight": 0, 00:14:43.934 "high_priority_weight": 0, 00:14:43.934 "nvme_adminq_poll_period_us": 10000, 00:14:43.934 "nvme_ioq_poll_period_us": 0, 00:14:43.934 "io_queue_requests": 0, 00:14:43.934 "delay_cmd_submit": true, 00:14:43.934 "transport_retry_count": 4, 00:14:43.935 "bdev_retry_count": 3, 
00:14:43.935 "transport_ack_timeout": 0, 00:14:43.935 "ctrlr_loss_timeout_sec": 0, 00:14:43.935 "reconnect_delay_sec": 0, 00:14:43.935 "fast_io_fail_timeout_sec": 0, 00:14:43.935 "disable_auto_failback": false, 00:14:43.935 "generate_uuids": false, 00:14:43.935 "transport_tos": 0, 00:14:43.935 "nvme_error_stat": false, 00:14:43.935 "rdma_srq_size": 0, 00:14:43.935 "io_path_stat": false, 00:14:43.935 "allow_accel_sequence": false, 00:14:43.935 "rdma_max_cq_size": 0, 00:14:43.935 "rdma_cm_event_timeout_ms": 0, 00:14:43.935 "dhchap_digests": [ 00:14:43.935 "sha256", 00:14:43.935 "sha384", 00:14:43.935 "sha512" 00:14:43.935 ], 00:14:43.935 "dhchap_dhgroups": [ 00:14:43.935 "null", 00:14:43.935 "ffdhe2048", 00:14:43.935 "ffdhe3072", 00:14:43.935 "ffdhe4096", 00:14:43.935 "ffdhe6144", 00:14:43.935 "ffdhe8192" 00:14:43.935 ] 00:14:43.935 } 00:14:43.935 }, 00:14:43.935 { 00:14:43.935 "method": "bdev_nvme_set_hotplug", 00:14:43.935 "params": { 00:14:43.935 "period_us": 100000, 00:14:43.935 "enable": false 00:14:43.935 } 00:14:43.935 }, 00:14:43.935 { 00:14:43.935 "method": "bdev_malloc_create", 00:14:43.935 "params": { 00:14:43.935 "name": "malloc0", 00:14:43.935 "num_blocks": 8192, 00:14:43.935 "block_size": 4096, 00:14:43.935 "physical_block_size": 4096, 00:14:43.935 "uuid": "84a65388-f9c4-44c2-9f5c-402c94cac05b", 00:14:43.935 "optimal_io_boundary": 0 00:14:43.935 } 00:14:43.935 }, 00:14:43.935 { 00:14:43.935 "method": "bdev_wait_for_examine" 00:14:43.935 } 00:14:43.935 ] 00:14:43.935 }, 00:14:43.935 { 00:14:43.935 "subsystem": "nbd", 00:14:43.935 "config": [] 00:14:43.935 }, 00:14:43.935 { 00:14:43.935 "subsystem": "scheduler", 00:14:43.935 "config": [ 00:14:43.935 { 00:14:43.935 "method": "framework_set_scheduler", 00:14:43.935 "params": { 00:14:43.935 "name": "static" 00:14:43.935 } 00:14:43.935 } 00:14:43.935 ] 00:14:43.935 }, 00:14:43.935 { 00:14:43.935 "subsystem": "nvmf", 00:14:43.935 "config": [ 00:14:43.935 { 00:14:43.935 "method": "nvmf_set_config", 00:14:43.935 "params": { 00:14:43.935 "discovery_filter": "match_any", 00:14:43.935 "admin_cmd_passthru": { 00:14:43.935 "identify_ctrlr": false 00:14:43.935 } 00:14:43.935 } 00:14:43.935 }, 00:14:43.935 { 00:14:43.935 "method": "nvmf_set_max_subsystems", 00:14:43.935 "params": { 00:14:43.935 "max_subsystems": 1024 00:14:43.935 } 00:14:43.935 }, 00:14:43.935 { 00:14:43.935 "method": "nvmf_set_crdt", 00:14:43.935 "params": { 00:14:43.935 "crdt1": 0, 00:14:43.935 "crdt2": 0, 00:14:43.935 "crdt3": 0 00:14:43.935 } 00:14:43.935 }, 00:14:43.935 { 00:14:43.935 "method": "nvmf_create_transport", 00:14:43.935 "params": { 00:14:43.935 "trtype": "TCP", 00:14:43.935 "max_queue_depth": 128, 00:14:43.935 "max_io_qpairs_per_ctrlr": 127, 00:14:43.935 "in_capsule_data_size": 4096, 00:14:43.935 "max_io_size": 131072, 00:14:43.935 "io_unit_size": 131072, 00:14:43.935 "max_aq_depth": 128, 00:14:43.935 "num_shared_buffers": 511, 00:14:43.935 "buf_cache_size": 4294967295, 00:14:43.935 "dif_insert_or_strip": false, 00:14:43.935 "zcopy": false, 00:14:43.935 "c2h_success": false, 00:14:43.935 "sock_priority": 0, 00:14:43.935 "abort_timeout_sec": 1, 00:14:43.935 "ack_timeout": 0, 00:14:43.935 "data_wr_pool_size": 0 00:14:43.935 } 00:14:43.935 }, 00:14:43.935 { 00:14:43.935 "method": "nvmf_create_subsystem", 00:14:43.935 "params": { 00:14:43.935 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:43.935 "allow_any_host": false, 00:14:43.935 "serial_number": "00000000000000000000", 00:14:43.935 "model_number": "SPDK bdev Controller", 00:14:43.935 "max_namespaces": 32, 
00:14:43.935 "min_cntlid": 1, 00:14:43.935 "max_cntlid": 65519, 00:14:43.935 "ana_reporting": false 00:14:43.935 } 00:14:43.935 }, 00:14:43.935 { 00:14:43.935 "method": "nvmf_subsystem_add_host", 00:14:43.935 "params": { 00:14:43.935 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:43.935 "host": "nqn.2016-06.io.spdk:host1", 00:14:43.935 "psk": "key0" 00:14:43.935 } 00:14:43.935 }, 00:14:43.935 { 00:14:43.935 "method": "nvmf_subsystem_add_ns", 00:14:43.935 "params": { 00:14:43.935 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:43.935 "namespace": { 00:14:43.935 "nsid": 1, 00:14:43.935 "bdev_name": "malloc0", 00:14:43.935 "nguid": "84A65388F9C444C29F5C402C94CAC05B", 00:14:43.935 "uuid": "84a65388-f9c4-44c2-9f5c-402c94cac05b", 00:14:43.935 "no_auto_visible": false 00:14:43.935 } 00:14:43.935 } 00:14:43.935 }, 00:14:43.935 { 00:14:43.935 "method": "nvmf_subsystem_add_listener", 00:14:43.935 "params": { 00:14:43.935 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:43.935 "listen_address": { 00:14:43.935 "trtype": "TCP", 00:14:43.935 "adrfam": "IPv4", 00:14:43.935 "traddr": "10.0.0.2", 00:14:43.935 "trsvcid": "4420" 00:14:43.935 }, 00:14:43.935 "secure_channel": true 00:14:43.935 } 00:14:43.935 } 00:14:43.935 ] 00:14:43.935 } 00:14:43.935 ] 00:14:43.935 }' 00:14:43.935 12:39:09 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:14:44.194 12:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:14:44.194 "subsystems": [ 00:14:44.194 { 00:14:44.194 "subsystem": "keyring", 00:14:44.194 "config": [ 00:14:44.194 { 00:14:44.194 "method": "keyring_file_add_key", 00:14:44.194 "params": { 00:14:44.194 "name": "key0", 00:14:44.194 "path": "/tmp/tmp.duiv4dqpxY" 00:14:44.194 } 00:14:44.194 } 00:14:44.194 ] 00:14:44.194 }, 00:14:44.194 { 00:14:44.194 "subsystem": "iobuf", 00:14:44.194 "config": [ 00:14:44.194 { 00:14:44.194 "method": "iobuf_set_options", 00:14:44.194 "params": { 00:14:44.194 "small_pool_count": 8192, 00:14:44.194 "large_pool_count": 1024, 00:14:44.194 "small_bufsize": 8192, 00:14:44.194 "large_bufsize": 135168 00:14:44.194 } 00:14:44.194 } 00:14:44.194 ] 00:14:44.194 }, 00:14:44.194 { 00:14:44.194 "subsystem": "sock", 00:14:44.194 "config": [ 00:14:44.194 { 00:14:44.194 "method": "sock_set_default_impl", 00:14:44.194 "params": { 00:14:44.194 "impl_name": "uring" 00:14:44.194 } 00:14:44.194 }, 00:14:44.194 { 00:14:44.194 "method": "sock_impl_set_options", 00:14:44.194 "params": { 00:14:44.194 "impl_name": "ssl", 00:14:44.194 "recv_buf_size": 4096, 00:14:44.194 "send_buf_size": 4096, 00:14:44.194 "enable_recv_pipe": true, 00:14:44.194 "enable_quickack": false, 00:14:44.194 "enable_placement_id": 0, 00:14:44.194 "enable_zerocopy_send_server": true, 00:14:44.194 "enable_zerocopy_send_client": false, 00:14:44.194 "zerocopy_threshold": 0, 00:14:44.194 "tls_version": 0, 00:14:44.194 "enable_ktls": false 00:14:44.194 } 00:14:44.194 }, 00:14:44.194 { 00:14:44.194 "method": "sock_impl_set_options", 00:14:44.194 "params": { 00:14:44.194 "impl_name": "posix", 00:14:44.194 "recv_buf_size": 2097152, 00:14:44.194 "send_buf_size": 2097152, 00:14:44.194 "enable_recv_pipe": true, 00:14:44.194 "enable_quickack": false, 00:14:44.194 "enable_placement_id": 0, 00:14:44.194 "enable_zerocopy_send_server": true, 00:14:44.194 "enable_zerocopy_send_client": false, 00:14:44.194 "zerocopy_threshold": 0, 00:14:44.194 "tls_version": 0, 00:14:44.194 "enable_ktls": false 00:14:44.194 } 00:14:44.194 }, 00:14:44.194 { 00:14:44.194 "method": 
"sock_impl_set_options", 00:14:44.194 "params": { 00:14:44.194 "impl_name": "uring", 00:14:44.194 "recv_buf_size": 2097152, 00:14:44.194 "send_buf_size": 2097152, 00:14:44.194 "enable_recv_pipe": true, 00:14:44.194 "enable_quickack": false, 00:14:44.194 "enable_placement_id": 0, 00:14:44.194 "enable_zerocopy_send_server": false, 00:14:44.194 "enable_zerocopy_send_client": false, 00:14:44.194 "zerocopy_threshold": 0, 00:14:44.194 "tls_version": 0, 00:14:44.194 "enable_ktls": false 00:14:44.194 } 00:14:44.194 } 00:14:44.194 ] 00:14:44.194 }, 00:14:44.194 { 00:14:44.194 "subsystem": "vmd", 00:14:44.194 "config": [] 00:14:44.194 }, 00:14:44.194 { 00:14:44.194 "subsystem": "accel", 00:14:44.194 "config": [ 00:14:44.194 { 00:14:44.194 "method": "accel_set_options", 00:14:44.194 "params": { 00:14:44.194 "small_cache_size": 128, 00:14:44.194 "large_cache_size": 16, 00:14:44.194 "task_count": 2048, 00:14:44.194 "sequence_count": 2048, 00:14:44.194 "buf_count": 2048 00:14:44.194 } 00:14:44.194 } 00:14:44.194 ] 00:14:44.194 }, 00:14:44.194 { 00:14:44.194 "subsystem": "bdev", 00:14:44.194 "config": [ 00:14:44.194 { 00:14:44.194 "method": "bdev_set_options", 00:14:44.194 "params": { 00:14:44.194 "bdev_io_pool_size": 65535, 00:14:44.194 "bdev_io_cache_size": 256, 00:14:44.194 "bdev_auto_examine": true, 00:14:44.194 "iobuf_small_cache_size": 128, 00:14:44.194 "iobuf_large_cache_size": 16 00:14:44.194 } 00:14:44.194 }, 00:14:44.194 { 00:14:44.194 "method": "bdev_raid_set_options", 00:14:44.194 "params": { 00:14:44.194 "process_window_size_kb": 1024 00:14:44.194 } 00:14:44.194 }, 00:14:44.194 { 00:14:44.194 "method": "bdev_iscsi_set_options", 00:14:44.194 "params": { 00:14:44.194 "timeout_sec": 30 00:14:44.194 } 00:14:44.194 }, 00:14:44.194 { 00:14:44.194 "method": "bdev_nvme_set_options", 00:14:44.194 "params": { 00:14:44.194 "action_on_timeout": "none", 00:14:44.194 "timeout_us": 0, 00:14:44.194 "timeout_admin_us": 0, 00:14:44.194 "keep_alive_timeout_ms": 10000, 00:14:44.194 "arbitration_burst": 0, 00:14:44.194 "low_priority_weight": 0, 00:14:44.194 "medium_priority_weight": 0, 00:14:44.194 "high_priority_weight": 0, 00:14:44.194 "nvme_adminq_poll_period_us": 10000, 00:14:44.194 "nvme_ioq_poll_period_us": 0, 00:14:44.194 "io_queue_requests": 512, 00:14:44.194 "delay_cmd_submit": true, 00:14:44.194 "transport_retry_count": 4, 00:14:44.194 "bdev_retry_count": 3, 00:14:44.194 "transport_ack_timeout": 0, 00:14:44.194 "ctrlr_loss_timeout_sec": 0, 00:14:44.194 "reconnect_delay_sec": 0, 00:14:44.194 "fast_io_fail_timeout_sec": 0, 00:14:44.194 "disable_auto_failback": false, 00:14:44.194 "generate_uuids": false, 00:14:44.194 "transport_tos": 0, 00:14:44.194 "nvme_error_stat": false, 00:14:44.194 "rdma_srq_size": 0, 00:14:44.194 "io_path_stat": false, 00:14:44.194 "allow_accel_sequence": false, 00:14:44.194 "rdma_max_cq_size": 0, 00:14:44.194 "rdma_cm_event_timeout_ms": 0, 00:14:44.194 "dhchap_digests": [ 00:14:44.194 "sha256", 00:14:44.194 "sha384", 00:14:44.194 "sha512" 00:14:44.194 ], 00:14:44.194 "dhchap_dhgroups": [ 00:14:44.194 "null", 00:14:44.194 "ffdhe2048", 00:14:44.194 "ffdhe3072", 00:14:44.194 "ffdhe4096", 00:14:44.194 "ffdhe6144", 00:14:44.194 "ffdhe8192" 00:14:44.194 ] 00:14:44.194 } 00:14:44.194 }, 00:14:44.194 { 00:14:44.194 "method": "bdev_nvme_attach_controller", 00:14:44.194 "params": { 00:14:44.195 "name": "nvme0", 00:14:44.195 "trtype": "TCP", 00:14:44.195 "adrfam": "IPv4", 00:14:44.195 "traddr": "10.0.0.2", 00:14:44.195 "trsvcid": "4420", 00:14:44.195 "subnqn": "nqn.2016-06.io.spdk:cnode1", 
00:14:44.195 "prchk_reftag": false, 00:14:44.195 "prchk_guard": false, 00:14:44.195 "ctrlr_loss_timeout_sec": 0, 00:14:44.195 "reconnect_delay_sec": 0, 00:14:44.195 "fast_io_fail_timeout_sec": 0, 00:14:44.195 "psk": "key0", 00:14:44.195 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:44.195 "hdgst": false, 00:14:44.195 "ddgst": false 00:14:44.195 } 00:14:44.195 }, 00:14:44.195 { 00:14:44.195 "method": "bdev_nvme_set_hotplug", 00:14:44.195 "params": { 00:14:44.195 "period_us": 100000, 00:14:44.195 "enable": false 00:14:44.195 } 00:14:44.195 }, 00:14:44.195 { 00:14:44.195 "method": "bdev_enable_histogram", 00:14:44.195 "params": { 00:14:44.195 "name": "nvme0n1", 00:14:44.195 "enable": true 00:14:44.195 } 00:14:44.195 }, 00:14:44.195 { 00:14:44.195 "method": "bdev_wait_for_examine" 00:14:44.195 } 00:14:44.195 ] 00:14:44.195 }, 00:14:44.195 { 00:14:44.195 "subsystem": "nbd", 00:14:44.195 "config": [] 00:14:44.195 } 00:14:44.195 ] 00:14:44.195 }' 00:14:44.195 12:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 74306 00:14:44.195 12:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74306 ']' 00:14:44.195 12:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74306 00:14:44.195 12:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:44.195 12:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:44.195 12:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74306 00:14:44.195 12:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:44.195 12:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:44.195 12:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74306' 00:14:44.195 killing process with pid 74306 00:14:44.195 12:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74306 00:14:44.195 Received shutdown signal, test time was about 1.000000 seconds 00:14:44.195 00:14:44.195 Latency(us) 00:14:44.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.195 =================================================================================================================== 00:14:44.195 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:44.195 12:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74306 00:14:44.454 12:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 74269 00:14:44.454 12:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74269 ']' 00:14:44.454 12:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74269 00:14:44.454 12:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:44.454 12:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:44.454 12:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74269 00:14:44.454 12:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:44.454 12:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:44.454 killing process with pid 74269 00:14:44.454 12:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74269' 00:14:44.454 12:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74269 00:14:44.454 12:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74269 
00:14:44.713 12:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:14:44.713 12:39:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:44.713 12:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:44.713 12:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:44.713 12:39:10 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:14:44.713 "subsystems": [ 00:14:44.713 { 00:14:44.713 "subsystem": "keyring", 00:14:44.713 "config": [ 00:14:44.713 { 00:14:44.713 "method": "keyring_file_add_key", 00:14:44.713 "params": { 00:14:44.713 "name": "key0", 00:14:44.713 "path": "/tmp/tmp.duiv4dqpxY" 00:14:44.713 } 00:14:44.713 } 00:14:44.713 ] 00:14:44.713 }, 00:14:44.713 { 00:14:44.713 "subsystem": "iobuf", 00:14:44.713 "config": [ 00:14:44.713 { 00:14:44.713 "method": "iobuf_set_options", 00:14:44.713 "params": { 00:14:44.713 "small_pool_count": 8192, 00:14:44.713 "large_pool_count": 1024, 00:14:44.713 "small_bufsize": 8192, 00:14:44.713 "large_bufsize": 135168 00:14:44.713 } 00:14:44.713 } 00:14:44.713 ] 00:14:44.713 }, 00:14:44.713 { 00:14:44.713 "subsystem": "sock", 00:14:44.713 "config": [ 00:14:44.713 { 00:14:44.713 "method": "sock_set_default_impl", 00:14:44.713 "params": { 00:14:44.713 "impl_name": "uring" 00:14:44.713 } 00:14:44.713 }, 00:14:44.713 { 00:14:44.713 "method": "sock_impl_set_options", 00:14:44.713 "params": { 00:14:44.713 "impl_name": "ssl", 00:14:44.713 "recv_buf_size": 4096, 00:14:44.713 "send_buf_size": 4096, 00:14:44.713 "enable_recv_pipe": true, 00:14:44.713 "enable_quickack": false, 00:14:44.713 "enable_placement_id": 0, 00:14:44.713 "enable_zerocopy_send_server": true, 00:14:44.713 "enable_zerocopy_send_client": false, 00:14:44.713 "zerocopy_threshold": 0, 00:14:44.713 "tls_version": 0, 00:14:44.713 "enable_ktls": false 00:14:44.713 } 00:14:44.713 }, 00:14:44.713 { 00:14:44.713 "method": "sock_impl_set_options", 00:14:44.713 "params": { 00:14:44.713 "impl_name": "posix", 00:14:44.713 "recv_buf_size": 2097152, 00:14:44.713 "send_buf_size": 2097152, 00:14:44.713 "enable_recv_pipe": true, 00:14:44.713 "enable_quickack": false, 00:14:44.713 "enable_placement_id": 0, 00:14:44.713 "enable_zerocopy_send_server": true, 00:14:44.713 "enable_zerocopy_send_client": false, 00:14:44.713 "zerocopy_threshold": 0, 00:14:44.713 "tls_version": 0, 00:14:44.713 "enable_ktls": false 00:14:44.713 } 00:14:44.713 }, 00:14:44.713 { 00:14:44.713 "method": "sock_impl_set_options", 00:14:44.713 "params": { 00:14:44.713 "impl_name": "uring", 00:14:44.713 "recv_buf_size": 2097152, 00:14:44.713 "send_buf_size": 2097152, 00:14:44.713 "enable_recv_pipe": true, 00:14:44.713 "enable_quickack": false, 00:14:44.713 "enable_placement_id": 0, 00:14:44.713 "enable_zerocopy_send_server": false, 00:14:44.713 "enable_zerocopy_send_client": false, 00:14:44.713 "zerocopy_threshold": 0, 00:14:44.713 "tls_version": 0, 00:14:44.713 "enable_ktls": false 00:14:44.713 } 00:14:44.713 } 00:14:44.713 ] 00:14:44.713 }, 00:14:44.713 { 00:14:44.713 "subsystem": "vmd", 00:14:44.713 "config": [] 00:14:44.713 }, 00:14:44.713 { 00:14:44.713 "subsystem": "accel", 00:14:44.713 "config": [ 00:14:44.713 { 00:14:44.713 "method": "accel_set_options", 00:14:44.713 "params": { 00:14:44.713 "small_cache_size": 128, 00:14:44.713 "large_cache_size": 16, 00:14:44.713 "task_count": 2048, 00:14:44.713 "sequence_count": 2048, 00:14:44.713 "buf_count": 2048 00:14:44.713 } 00:14:44.713 } 00:14:44.713 ] 00:14:44.713 }, 00:14:44.713 { 
00:14:44.713 "subsystem": "bdev", 00:14:44.713 "config": [ 00:14:44.713 { 00:14:44.713 "method": "bdev_set_options", 00:14:44.713 "params": { 00:14:44.713 "bdev_io_pool_size": 65535, 00:14:44.713 "bdev_io_cache_size": 256, 00:14:44.713 "bdev_auto_examine": true, 00:14:44.713 "iobuf_small_cache_size": 128, 00:14:44.713 "iobuf_large_cache_size": 16 00:14:44.713 } 00:14:44.713 }, 00:14:44.713 { 00:14:44.713 "method": "bdev_raid_set_options", 00:14:44.713 "params": { 00:14:44.713 "process_window_size_kb": 1024 00:14:44.713 } 00:14:44.713 }, 00:14:44.713 { 00:14:44.713 "method": "bdev_iscsi_set_options", 00:14:44.713 "params": { 00:14:44.713 "timeout_sec": 30 00:14:44.713 } 00:14:44.713 }, 00:14:44.713 { 00:14:44.713 "method": "bdev_nvme_set_options", 00:14:44.713 "params": { 00:14:44.713 "action_on_timeout": "none", 00:14:44.713 "timeout_us": 0, 00:14:44.713 "timeout_admin_us": 0, 00:14:44.713 "keep_alive_timeout_ms": 10000, 00:14:44.713 "arbitration_burst": 0, 00:14:44.713 "low_priority_weight": 0, 00:14:44.713 "medium_priority_weight": 0, 00:14:44.713 "high_priority_weight": 0, 00:14:44.713 "nvme_adminq_poll_period_us": 10000, 00:14:44.713 "nvme_ioq_poll_period_us": 0, 00:14:44.713 "io_queue_requests": 0, 00:14:44.713 "delay_cmd_submit": true, 00:14:44.713 "transport_retry_count": 4, 00:14:44.713 "bdev_retry_count": 3, 00:14:44.713 "transport_ack_timeout": 0, 00:14:44.713 "ctrlr_loss_timeout_sec": 0, 00:14:44.713 "reconnect_delay_sec": 0, 00:14:44.713 "fast_io_fail_timeout_sec": 0, 00:14:44.713 "disable_auto_failback": false, 00:14:44.713 "generate_uuids": false, 00:14:44.713 "transport_tos": 0, 00:14:44.713 "nvme_error_stat": false, 00:14:44.713 "rdma_srq_size": 0, 00:14:44.713 "io_path_stat": false, 00:14:44.713 "allow_accel_sequence": false, 00:14:44.713 "rdma_max_cq_size": 0, 00:14:44.713 "rdma_cm_event_timeout_ms": 0, 00:14:44.713 "dhchap_digests": [ 00:14:44.713 "sha256", 00:14:44.713 "sha384", 00:14:44.713 "sha512" 00:14:44.713 ], 00:14:44.713 "dhchap_dhgroups": [ 00:14:44.713 "null", 00:14:44.713 "ffdhe2048", 00:14:44.713 "ffdhe3072", 00:14:44.713 "ffdhe4096", 00:14:44.713 "ffdhe6144", 00:14:44.713 "ffdhe8192" 00:14:44.713 ] 00:14:44.713 } 00:14:44.713 }, 00:14:44.713 { 00:14:44.713 "method": "bdev_nvme_set_hotplug", 00:14:44.713 "params": { 00:14:44.713 "period_us": 100000, 00:14:44.713 "enable": false 00:14:44.713 } 00:14:44.713 }, 00:14:44.713 { 00:14:44.713 "method": "bdev_malloc_create", 00:14:44.713 "params": { 00:14:44.713 "name": "malloc0", 00:14:44.713 "num_blocks": 8192, 00:14:44.713 "block_size": 4096, 00:14:44.713 "physical_block_size": 4096, 00:14:44.713 "uuid": "84a65388-f9c4-44c2-9f5c-402c94cac05b", 00:14:44.713 "optimal_io_boundary": 0 00:14:44.713 } 00:14:44.713 }, 00:14:44.713 { 00:14:44.713 "method": "bdev_wait_for_examine" 00:14:44.713 } 00:14:44.713 ] 00:14:44.713 }, 00:14:44.713 { 00:14:44.713 "subsystem": "nbd", 00:14:44.713 "config": [] 00:14:44.713 }, 00:14:44.713 { 00:14:44.713 "subsystem": "scheduler", 00:14:44.713 "config": [ 00:14:44.713 { 00:14:44.713 "method": "framework_set_scheduler", 00:14:44.713 "params": { 00:14:44.713 "name": "static" 00:14:44.713 } 00:14:44.713 } 00:14:44.713 ] 00:14:44.713 }, 00:14:44.713 { 00:14:44.713 "subsystem": "nvmf", 00:14:44.713 "config": [ 00:14:44.713 { 00:14:44.713 "method": "nvmf_set_config", 00:14:44.713 "params": { 00:14:44.713 "discovery_filter": "match_any", 00:14:44.713 "admin_cmd_passthru": { 00:14:44.713 "identify_ctrlr": false 00:14:44.713 } 00:14:44.713 } 00:14:44.713 }, 00:14:44.713 { 00:14:44.713 "method": 
"nvmf_set_max_subsystems", 00:14:44.713 "params": { 00:14:44.713 "max_subsystems": 1024 00:14:44.713 } 00:14:44.713 }, 00:14:44.713 { 00:14:44.713 "method": "nvmf_set_crdt", 00:14:44.713 "params": { 00:14:44.713 "crdt1": 0, 00:14:44.713 "crdt2": 0, 00:14:44.713 "crdt3": 0 00:14:44.713 } 00:14:44.713 }, 00:14:44.713 { 00:14:44.713 "method": "nvmf_create_transport", 00:14:44.713 "params": { 00:14:44.713 "trtype": "TCP", 00:14:44.714 "max_queue_depth": 128, 00:14:44.714 "max_io_qpairs_per_ctrlr": 127, 00:14:44.714 "in_capsule_data_size": 4096, 00:14:44.714 "max_io_size": 131072, 00:14:44.714 "io_unit_size": 131072, 00:14:44.714 "max_aq_depth": 128, 00:14:44.714 "num_shared_buffers": 511, 00:14:44.714 "buf_cache_size": 4294967295, 00:14:44.714 "dif_insert_or_strip": false, 00:14:44.714 "zcopy": false, 00:14:44.714 "c2h_success": false, 00:14:44.714 "sock_priority": 0, 00:14:44.714 "abort_timeout_sec": 1, 00:14:44.714 "ack_timeout": 0, 00:14:44.714 "data_wr_pool_size": 0 00:14:44.714 } 00:14:44.714 }, 00:14:44.714 { 00:14:44.714 "method": "nvmf_create_subsystem", 00:14:44.714 "params": { 00:14:44.714 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:44.714 "allow_any_host": false, 00:14:44.714 "serial_number": "00000000000000000000", 00:14:44.714 "model_number": "SPDK bdev Controller", 00:14:44.714 "max_namespaces": 32, 00:14:44.714 "min_cntlid": 1, 00:14:44.714 "max_cntlid": 65519, 00:14:44.714 "ana_reporting": false 00:14:44.714 } 00:14:44.714 }, 00:14:44.714 { 00:14:44.714 "method": "nvmf_subsystem_add_host", 00:14:44.714 "params": { 00:14:44.714 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:44.714 "host": "nqn.2016-06.io.spdk:host1", 00:14:44.714 "psk": "key0" 00:14:44.714 } 00:14:44.714 }, 00:14:44.714 { 00:14:44.714 "method": "nvmf_subsystem_add_ns", 00:14:44.714 "params": { 00:14:44.714 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:44.714 "namespace": { 00:14:44.714 "nsid": 1, 00:14:44.714 "bdev_name": "malloc0", 00:14:44.714 "nguid": "84A65388F9C444C29F5C402C94CAC05B", 00:14:44.714 "uuid": "84a65388-f9c4-44c2-9f5c-402c94cac05b", 00:14:44.714 "no_auto_visible": false 00:14:44.714 } 00:14:44.714 } 00:14:44.714 }, 00:14:44.714 { 00:14:44.714 "method": "nvmf_subsystem_add_listener", 00:14:44.714 "params": { 00:14:44.714 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:44.714 "listen_address": { 00:14:44.714 "trtype": "TCP", 00:14:44.714 "adrfam": "IPv4", 00:14:44.714 "traddr": "10.0.0.2", 00:14:44.714 "trsvcid": "4420" 00:14:44.714 }, 00:14:44.714 "secure_channel": true 00:14:44.714 } 00:14:44.714 } 00:14:44.714 ] 00:14:44.714 } 00:14:44.714 ] 00:14:44.714 }' 00:14:44.714 12:39:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=74361 00:14:44.714 12:39:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 74361 00:14:44.714 12:39:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:14:44.714 12:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74361 ']' 00:14:44.714 12:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.714 12:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:44.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.714 12:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:44.714 12:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:44.714 12:39:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:44.973 [2024-07-12 12:39:10.828705] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:14:44.973 [2024-07-12 12:39:10.828855] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:44.973 [2024-07-12 12:39:10.965500] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.231 [2024-07-12 12:39:11.084109] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:45.231 [2024-07-12 12:39:11.084161] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:45.231 [2024-07-12 12:39:11.084173] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:45.231 [2024-07-12 12:39:11.084182] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:45.231 [2024-07-12 12:39:11.084189] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:45.231 [2024-07-12 12:39:11.084277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.231 [2024-07-12 12:39:11.255253] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:45.489 [2024-07-12 12:39:11.333766] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:45.489 [2024-07-12 12:39:11.365704] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:45.489 [2024-07-12 12:39:11.365970] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:46.056 12:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:46.056 12:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:46.056 12:39:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:46.056 12:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:46.056 12:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:46.056 12:39:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:46.056 12:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=74393 00:14:46.057 12:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 74393 /var/tmp/bdevperf.sock 00:14:46.057 12:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 74393 ']' 00:14:46.057 12:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:46.057 12:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:46.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:46.057 12:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:14:46.057 12:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:14:46.057 12:39:11 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:14:46.057 "subsystems": [ 00:14:46.057 { 00:14:46.057 "subsystem": "keyring", 00:14:46.057 "config": [ 00:14:46.057 { 00:14:46.057 "method": "keyring_file_add_key", 00:14:46.057 "params": { 00:14:46.057 "name": "key0", 00:14:46.057 "path": "/tmp/tmp.duiv4dqpxY" 00:14:46.057 } 00:14:46.057 } 00:14:46.057 ] 00:14:46.057 }, 00:14:46.057 { 00:14:46.057 "subsystem": "iobuf", 00:14:46.057 "config": [ 00:14:46.057 { 00:14:46.057 "method": "iobuf_set_options", 00:14:46.057 "params": { 00:14:46.057 "small_pool_count": 8192, 00:14:46.057 "large_pool_count": 1024, 00:14:46.057 "small_bufsize": 8192, 00:14:46.057 "large_bufsize": 135168 00:14:46.057 } 00:14:46.057 } 00:14:46.057 ] 00:14:46.057 }, 00:14:46.057 { 00:14:46.057 "subsystem": "sock", 00:14:46.057 "config": [ 00:14:46.057 { 00:14:46.057 "method": "sock_set_default_impl", 00:14:46.057 "params": { 00:14:46.057 "impl_name": "uring" 00:14:46.057 } 00:14:46.057 }, 00:14:46.057 { 00:14:46.057 "method": "sock_impl_set_options", 00:14:46.057 "params": { 00:14:46.057 "impl_name": "ssl", 00:14:46.057 "recv_buf_size": 4096, 00:14:46.057 "send_buf_size": 4096, 00:14:46.057 "enable_recv_pipe": true, 00:14:46.057 "enable_quickack": false, 00:14:46.057 "enable_placement_id": 0, 00:14:46.057 "enable_zerocopy_send_server": true, 00:14:46.057 "enable_zerocopy_send_client": false, 00:14:46.057 "zerocopy_threshold": 0, 00:14:46.057 "tls_version": 0, 00:14:46.057 "enable_ktls": false 00:14:46.057 } 00:14:46.057 }, 00:14:46.057 { 00:14:46.057 "method": "sock_impl_set_options", 00:14:46.057 "params": { 00:14:46.057 "impl_name": "posix", 00:14:46.057 "recv_buf_size": 2097152, 00:14:46.057 "send_buf_size": 2097152, 00:14:46.057 "enable_recv_pipe": true, 00:14:46.057 "enable_quickack": false, 00:14:46.057 "enable_placement_id": 0, 00:14:46.057 "enable_zerocopy_send_server": true, 00:14:46.057 "enable_zerocopy_send_client": false, 00:14:46.057 "zerocopy_threshold": 0, 00:14:46.057 "tls_version": 0, 00:14:46.057 "enable_ktls": false 00:14:46.057 } 00:14:46.057 }, 00:14:46.057 { 00:14:46.057 "method": "sock_impl_set_options", 00:14:46.057 "params": { 00:14:46.057 "impl_name": "uring", 00:14:46.057 "recv_buf_size": 2097152, 00:14:46.057 "send_buf_size": 2097152, 00:14:46.057 "enable_recv_pipe": true, 00:14:46.057 "enable_quickack": false, 00:14:46.057 "enable_placement_id": 0, 00:14:46.057 "enable_zerocopy_send_server": false, 00:14:46.057 "enable_zerocopy_send_client": false, 00:14:46.057 "zerocopy_threshold": 0, 00:14:46.057 "tls_version": 0, 00:14:46.057 "enable_ktls": false 00:14:46.057 } 00:14:46.057 } 00:14:46.057 ] 00:14:46.057 }, 00:14:46.057 { 00:14:46.057 "subsystem": "vmd", 00:14:46.057 "config": [] 00:14:46.057 }, 00:14:46.057 { 00:14:46.057 "subsystem": "accel", 00:14:46.057 "config": [ 00:14:46.057 { 00:14:46.057 "method": "accel_set_options", 00:14:46.057 "params": { 00:14:46.057 "small_cache_size": 128, 00:14:46.057 "large_cache_size": 16, 00:14:46.057 "task_count": 2048, 00:14:46.057 "sequence_count": 2048, 00:14:46.057 "buf_count": 2048 00:14:46.057 } 00:14:46.057 } 00:14:46.057 ] 00:14:46.057 }, 00:14:46.057 { 00:14:46.057 "subsystem": "bdev", 00:14:46.057 "config": [ 00:14:46.057 { 00:14:46.057 "method": "bdev_set_options", 00:14:46.057 "params": { 00:14:46.057 "bdev_io_pool_size": 
65535, 00:14:46.057 "bdev_io_cache_size": 256, 00:14:46.057 "bdev_auto_examine": true, 00:14:46.057 "iobuf_small_cache_size": 128, 00:14:46.057 "iobuf_large_cache_size": 16 00:14:46.057 } 00:14:46.057 }, 00:14:46.057 { 00:14:46.057 "method": "bdev_raid_set_options", 00:14:46.057 "params": { 00:14:46.057 "process_window_size_kb": 1024 00:14:46.057 } 00:14:46.057 }, 00:14:46.057 { 00:14:46.057 "method": "bdev_iscsi_set_options", 00:14:46.057 "params": { 00:14:46.057 "timeout_sec": 30 00:14:46.057 } 00:14:46.057 }, 00:14:46.057 { 00:14:46.057 "method": "bdev_nvme_set_options", 00:14:46.057 "params": { 00:14:46.057 "action_on_timeout": "none", 00:14:46.057 "timeout_us": 0, 00:14:46.057 "timeout_admin_us": 0, 00:14:46.057 "keep_alive_timeout_ms": 10000, 00:14:46.057 "arbitration_burst": 0, 00:14:46.057 "low_priority_weight": 0, 00:14:46.057 "medium_priority_weight": 0, 00:14:46.057 "high_priority_weight": 0, 00:14:46.057 "nvme_adminq_poll_period_us": 10000, 00:14:46.057 "nvme_ioq_poll_period_us": 0, 00:14:46.057 "io_queue_requests": 512, 00:14:46.057 "delay_cmd_submit": true, 00:14:46.057 "transport_retry_count": 4, 00:14:46.057 "bdev_retry_count": 3, 00:14:46.057 "transport_ack_timeout": 0, 00:14:46.057 "ctrlr_loss_timeout_sec": 0, 00:14:46.057 "reconnect_delay_sec": 0, 00:14:46.057 "fast_io_fail_timeout_sec": 0, 00:14:46.057 "disable_auto_failback": false, 00:14:46.057 "generate_uuids": false, 00:14:46.057 "transport_tos": 0, 00:14:46.057 "nvme_error_stat": false, 00:14:46.057 "rdma_srq_size": 0, 00:14:46.057 "io_path_stat": false, 00:14:46.057 "allow_accel_sequence": false, 00:14:46.057 "rdma_max_cq_size": 0, 00:14:46.057 "rdma_cm_event_timeout_ms": 0, 00:14:46.057 "dhchap_digests": [ 00:14:46.057 "sha256", 00:14:46.057 "sha384", 00:14:46.057 "sha512" 00:14:46.057 ], 00:14:46.057 "dhchap_dhgroups": [ 00:14:46.057 "null", 00:14:46.057 "ffdhe2048", 00:14:46.057 "ffdhe3072", 00:14:46.057 "ffdhe4096", 00:14:46.057 "ffdhe6144", 00:14:46.057 "ffdhe8192" 00:14:46.057 ] 00:14:46.057 } 00:14:46.057 }, 00:14:46.057 { 00:14:46.057 "method": "bdev_nvme_attach_controller", 00:14:46.057 "params": { 00:14:46.057 "name": "nvme0", 00:14:46.057 "trtype": "TCP", 00:14:46.057 "adrfam": "IPv4", 00:14:46.057 "traddr": "10.0.0.2", 00:14:46.057 "trsvcid": "4420", 00:14:46.057 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:46.057 "prchk_reftag": false, 00:14:46.057 "prchk_guard": false, 00:14:46.057 "ctrlr_loss_timeout_sec": 0, 00:14:46.057 "reconnect_delay_sec": 0, 00:14:46.057 "fast_io_fail_timeout_sec": 0, 00:14:46.057 "psk": "key0", 00:14:46.057 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:46.057 "hdgst": false, 00:14:46.057 "ddgst": false 00:14:46.057 } 00:14:46.057 }, 00:14:46.057 { 00:14:46.057 "method": "bdev_nvme_set_hotplug", 00:14:46.057 "params": { 00:14:46.057 "period_us": 100000, 00:14:46.057 "enable": false 00:14:46.057 } 00:14:46.057 }, 00:14:46.057 { 00:14:46.057 "method": "bdev_enable_histogram", 00:14:46.057 "params": { 00:14:46.057 "name": "nvme0n1", 00:14:46.057 "enable": true 00:14:46.057 } 00:14:46.057 }, 00:14:46.057 { 00:14:46.057 "method": "bdev_wait_for_examine" 00:14:46.057 } 00:14:46.057 ] 00:14:46.057 }, 00:14:46.057 { 00:14:46.057 "subsystem": "nbd", 00:14:46.057 "config": [] 00:14:46.057 } 00:14:46.057 ] 00:14:46.057 }' 00:14:46.057 12:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:46.057 12:39:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:46.057 [2024-07-12 12:39:11.931758] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 
24.03.0 initialization... 00:14:46.057 [2024-07-12 12:39:11.931888] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74393 ] 00:14:46.058 [2024-07-12 12:39:12.070306] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.316 [2024-07-12 12:39:12.231235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.316 [2024-07-12 12:39:12.374407] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:46.575 [2024-07-12 12:39:12.432983] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:47.141 12:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:47.141 12:39:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:14:47.141 12:39:12 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:47.141 12:39:12 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:14:47.141 12:39:13 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:47.141 12:39:13 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:47.398 Running I/O for 1 seconds... 00:14:48.332 00:14:48.332 Latency(us) 00:14:48.332 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.332 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:48.332 Verification LBA range: start 0x0 length 0x2000 00:14:48.332 nvme0n1 : 1.02 3900.54 15.24 0.00 0.00 32436.01 290.44 23473.80 00:14:48.332 =================================================================================================================== 00:14:48.332 Total : 3900.54 15.24 0.00 0.00 32436.01 290.44 23473.80 00:14:48.332 0 00:14:48.332 12:39:14 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:14:48.332 12:39:14 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:14:48.332 12:39:14 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:14:48.332 12:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:14:48.332 12:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:14:48.332 12:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:14:48.332 12:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:48.332 12:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:14:48.332 12:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:14:48.332 12:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:14:48.332 12:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:48.332 nvmf_trace.0 00:14:48.590 12:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:14:48.590 12:39:14 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 74393 00:14:48.590 12:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74393 ']' 00:14:48.590 12:39:14 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@952 -- # kill -0 74393 00:14:48.590 12:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:48.590 12:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:48.590 12:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74393 00:14:48.590 12:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:48.590 12:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:48.590 killing process with pid 74393 00:14:48.590 12:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74393' 00:14:48.590 Received shutdown signal, test time was about 1.000000 seconds 00:14:48.590 00:14:48.590 Latency(us) 00:14:48.590 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.590 =================================================================================================================== 00:14:48.590 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:48.590 12:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74393 00:14:48.590 12:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74393 00:14:48.848 12:39:14 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:14:48.848 12:39:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:48.849 12:39:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:14:48.849 12:39:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:48.849 12:39:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:14:48.849 12:39:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:48.849 12:39:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:48.849 rmmod nvme_tcp 00:14:48.849 rmmod nvme_fabrics 00:14:48.849 rmmod nvme_keyring 00:14:48.849 12:39:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:48.849 12:39:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:14:48.849 12:39:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:14:48.849 12:39:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 74361 ']' 00:14:48.849 12:39:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 74361 00:14:48.849 12:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 74361 ']' 00:14:48.849 12:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 74361 00:14:48.849 12:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:14:48.849 12:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:48.849 12:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74361 00:14:48.849 killing process with pid 74361 00:14:48.849 12:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:48.849 12:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:48.849 12:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74361' 00:14:48.849 12:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 74361 00:14:48.849 12:39:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 74361 00:14:49.107 12:39:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:49.107 12:39:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:49.107 
12:39:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:49.107 12:39:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:49.107 12:39:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:49.108 12:39:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:49.108 12:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:49.108 12:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:49.367 12:39:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:49.367 12:39:15 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.qeG9HMEjVa /tmp/tmp.9U4WV5dbMk /tmp/tmp.duiv4dqpxY 00:14:49.367 ************************************ 00:14:49.367 END TEST nvmf_tls 00:14:49.367 ************************************ 00:14:49.367 00:14:49.367 real 1m27.625s 00:14:49.367 user 2m19.718s 00:14:49.367 sys 0m27.966s 00:14:49.367 12:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:49.367 12:39:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:49.367 12:39:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:49.367 12:39:15 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:49.367 12:39:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:49.367 12:39:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:49.367 12:39:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:49.367 ************************************ 00:14:49.367 START TEST nvmf_fips 00:14:49.367 ************************************ 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:49.367 * Looking for test storage... 
00:14:49.367 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=16360ad5-8c23-4d49-afe0-9a35c426fec5 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@341 -- # case "$op" in 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:14:49.367 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:49.368 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:49.368 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:49.368 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:14:49.368 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:14:49.368 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:49.368 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:49.368 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:49.368 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:14:49.368 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:49.368 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:14:49.368 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:14:49.368 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:49.368 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:14:49.368 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:14:49.368 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:14:49.368 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:14:49.368 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:14:49.368 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:14:49.368 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:14:49.368 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:49.368 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:14:49.368 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:14:49.368 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:14:49.368 12:39:15 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:14:49.368 12:39:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:14:49.626 Error setting digest 00:14:49.626 00E222BBF37F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:14:49.626 00E222BBF37F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:49.626 Cannot find device "nvmf_tgt_br" 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:49.626 Cannot find device "nvmf_tgt_br2" 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:49.626 Cannot find device "nvmf_tgt_br" 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:49.626 Cannot find device "nvmf_tgt_br2" 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:49.626 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:49.626 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:49.626 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:49.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:49.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:14:49.884 00:14:49.884 --- 10.0.0.2 ping statistics --- 00:14:49.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.884 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:49.884 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:49.884 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:14:49.884 00:14:49.884 --- 10.0.0.3 ping statistics --- 00:14:49.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.884 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:49.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:49.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:14:49.884 00:14:49.884 --- 10.0.0.1 ping statistics --- 00:14:49.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.884 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=74666 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 74666 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 74666 ']' 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:49.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:49.884 12:39:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:50.142 [2024-07-12 12:39:16.029913] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:14:50.142 [2024-07-12 12:39:16.030028] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:50.142 [2024-07-12 12:39:16.169041] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.400 [2024-07-12 12:39:16.285970] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:50.400 [2024-07-12 12:39:16.286052] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:50.400 [2024-07-12 12:39:16.286080] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:50.400 [2024-07-12 12:39:16.286104] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:50.400 [2024-07-12 12:39:16.286112] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:50.400 [2024-07-12 12:39:16.286141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:50.400 [2024-07-12 12:39:16.343096] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:50.966 12:39:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:50.966 12:39:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:14:50.966 12:39:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:50.966 12:39:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:50.966 12:39:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:50.966 12:39:17 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:50.966 12:39:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:14:50.966 12:39:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:50.966 12:39:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:50.966 12:39:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:50.966 12:39:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:50.966 12:39:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:50.966 12:39:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:50.966 12:39:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:51.224 [2024-07-12 12:39:17.241331] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:51.225 [2024-07-12 12:39:17.257253] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:51.225 [2024-07-12 12:39:17.257522] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:51.225 [2024-07-12 12:39:17.290145] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:51.225 malloc0 00:14:51.483 12:39:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
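Condensed from the fips.sh trace above: the target side writes the TLS PSK interchange key to key.txt with 0600 permissions, then setup_nvmf_tgt_conf drives scripts/rpc.py to create the TCP transport, a malloc-backed subsystem and a listener on 10.0.0.2:4420 (the tcp.c notices confirm the listener and the deprecated "PSK path" warning). The rpc.py argument lists are cut off in this excerpt, so the sketch below is a hedged reconstruction; the subsystem and host NQNs are taken from the bdevperf attach step further down, and the flags marked as assumed are not verbatim log content.

# Hedged sketch of the target-side TLS setup traced above (fips.sh@136..@141).
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
echo -n "$key" > "$key_path"                # fips.sh@138
chmod 0600 "$key_path"                      # fips.sh@139
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                                  # "TCP Transport Init" notice
$rpc bdev_malloc_create <size_mb> <block_size> -b malloc0                     # only the bdev name is visible above
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s <serial>             # NQN taken from the attach step below
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # listener notice above
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key_path"   # assumed flag; matches the deprecated PSK-path warning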
00:14:51.483 12:39:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=74706 00:14:51.483 12:39:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:51.483 12:39:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 74706 /var/tmp/bdevperf.sock 00:14:51.483 12:39:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 74706 ']' 00:14:51.483 12:39:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:51.483 12:39:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:51.483 12:39:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:51.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:51.483 12:39:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:51.483 12:39:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:51.483 [2024-07-12 12:39:17.405749] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:14:51.483 [2024-07-12 12:39:17.406194] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74706 ] 00:14:51.483 [2024-07-12 12:39:17.545902] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.741 [2024-07-12 12:39:17.711436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:51.741 [2024-07-12 12:39:17.772956] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:14:52.307 12:39:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:52.307 12:39:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:14:52.307 12:39:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:14:52.564 [2024-07-12 12:39:18.552825] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:52.564 [2024-07-12 12:39:18.552994] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:52.564 TLSTESTn1 00:14:52.822 12:39:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:52.822 Running I/O for 10 seconds... 
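The initiator side of the FIPS run, pulled together from the fips.sh trace above (all commands appear verbatim in the log): bdevperf is started idle on its own RPC socket, a TLS-enabled NVMe-oF controller is attached using the PSK file, and perform_tests kicks off the 10-second verify workload whose results follow. Backgrounding with $! is implied by the separate waitforlisten step rather than shown directly.

# Initiator-side flow as traced above (fips.sh@145..@154).
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
bdevperf_pid=$!                              # the trace records this as bdevperf_pid=74706
waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests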
00:15:02.790 00:15:02.790 Latency(us) 00:15:02.790 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:02.790 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:02.790 Verification LBA range: start 0x0 length 0x2000 00:15:02.790 TLSTESTn1 : 10.02 4017.51 15.69 0.00 0.00 31797.03 7417.48 27405.96 00:15:02.790 =================================================================================================================== 00:15:02.790 Total : 4017.51 15.69 0.00 0.00 31797.03 7417.48 27405.96 00:15:02.790 0 00:15:02.790 12:39:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:15:02.790 12:39:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:15:02.790 12:39:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:15:02.790 12:39:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:15:02.790 12:39:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:02.790 12:39:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:02.790 12:39:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:02.790 12:39:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:02.790 12:39:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:02.790 12:39:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:02.790 nvmf_trace.0 00:15:03.048 12:39:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:15:03.048 12:39:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 74706 00:15:03.048 12:39:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 74706 ']' 00:15:03.048 12:39:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 74706 00:15:03.048 12:39:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:15:03.048 12:39:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:03.048 12:39:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74706 00:15:03.048 12:39:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:15:03.048 12:39:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:15:03.048 killing process with pid 74706 00:15:03.048 12:39:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74706' 00:15:03.048 Received shutdown signal, test time was about 10.000000 seconds 00:15:03.048 00:15:03.048 Latency(us) 00:15:03.048 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:03.048 =================================================================================================================== 00:15:03.048 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:03.048 12:39:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 74706 00:15:03.048 [2024-07-12 12:39:28.947015] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:03.048 12:39:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 74706 00:15:03.306 12:39:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:15:03.306 12:39:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
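The cleanup trap that runs next (process_shm followed by killprocess, traced above) archives the SPDK trace file left in /dev/shm and then terminates the bdevperf process. A condensed sketch of what those two helpers do, based only on the commands visible in the trace:

# process_shm --id 0 and killprocess 74706, condensed from the trace above.
shm_files=$(find /dev/shm -name '*.0' -printf '%f\n')      # finds nvmf_trace.0
for n in $shm_files; do
    tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/${n}_shm.tar.gz "$n"
done
kill 74706        # bdevperf (process_name=reactor_2); "killing process with pid 74706"
wait 74706        # returns once the bdevperf reactor has exited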
00:15:03.306 12:39:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:15:03.306 12:39:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:03.306 12:39:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:15:03.306 12:39:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:03.306 12:39:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:03.306 rmmod nvme_tcp 00:15:03.306 rmmod nvme_fabrics 00:15:03.306 rmmod nvme_keyring 00:15:03.306 12:39:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:03.306 12:39:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:15:03.306 12:39:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:15:03.306 12:39:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 74666 ']' 00:15:03.306 12:39:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 74666 00:15:03.306 12:39:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 74666 ']' 00:15:03.306 12:39:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 74666 00:15:03.306 12:39:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:15:03.306 12:39:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:03.306 12:39:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 74666 00:15:03.306 12:39:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:03.306 killing process with pid 74666 00:15:03.306 12:39:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:03.306 12:39:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 74666' 00:15:03.306 12:39:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 74666 00:15:03.306 [2024-07-12 12:39:29.368387] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:03.306 12:39:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 74666 00:15:03.563 12:39:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:03.563 12:39:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:03.563 12:39:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:03.563 12:39:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:03.563 12:39:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:03.563 12:39:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:03.563 12:39:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:03.563 12:39:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:03.821 12:39:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:03.821 12:39:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:15:03.821 00:15:03.821 real 0m14.425s 00:15:03.821 user 0m19.795s 00:15:03.821 sys 0m5.663s 00:15:03.821 ************************************ 00:15:03.821 END TEST nvmf_fips 00:15:03.821 ************************************ 00:15:03.821 12:39:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:03.821 12:39:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:03.821 12:39:29 
nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:03.821 12:39:29 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:15:03.821 12:39:29 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:15:03.821 12:39:29 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:15:03.821 12:39:29 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:03.821 12:39:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:03.821 12:39:29 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:15:03.821 12:39:29 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:03.821 12:39:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:03.821 12:39:29 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:15:03.821 12:39:29 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:03.821 12:39:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:03.821 12:39:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:03.821 12:39:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:03.821 ************************************ 00:15:03.821 START TEST nvmf_identify 00:15:03.821 ************************************ 00:15:03.821 12:39:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:03.821 * Looking for test storage... 00:15:03.821 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:03.821 12:39:29 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:03.821 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:15:03.821 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:03.821 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:03.821 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:03.821 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:03.821 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:03.821 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:03.821 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:03.821 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:03.821 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:03.821 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:03.821 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:15:03.821 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=16360ad5-8c23-4d49-afe0-9a35c426fec5 00:15:03.821 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:03.821 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:03.821 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:03.821 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:03.821 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:03.821 12:39:29 
nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:03.821 12:39:29 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:03.821 12:39:29 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:03.822 12:39:29 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.822 12:39:29 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.822 12:39:29 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.822 12:39:29 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:15:03.822 12:39:29 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.822 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:15:03.822 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:03.822 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:03.822 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:03.822 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:03.822 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:03.822 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:03.822 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:03.822 12:39:29 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:03.822 12:39:29 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:04.119 12:39:29 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:04.119 12:39:29 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:15:04.119 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:04.119 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:04.119 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:04.119 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:04.119 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:04.119 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:04.119 12:39:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:04.119 12:39:29 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.119 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:04.119 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:04.119 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:04.119 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:04.119 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:04.119 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:04.119 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:04.119 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:04.119 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:04.119 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:04.119 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:04.119 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:04.119 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:04.119 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:04.119 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:04.119 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:04.119 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:04.119 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:04.119 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:04.119 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:04.119 Cannot find device "nvmf_tgt_br" 00:15:04.119 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:15:04.119 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:04.119 Cannot find device "nvmf_tgt_br2" 00:15:04.119 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:15:04.119 12:39:29 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:04.119 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:04.119 Cannot find device "nvmf_tgt_br" 00:15:04.119 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:15:04.119 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:04.119 Cannot find device "nvmf_tgt_br2" 00:15:04.119 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:15:04.119 12:39:29 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:04.119 12:39:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:04.119 12:39:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:04.119 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:04.119 12:39:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:15:04.119 12:39:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:04.119 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:04.119 12:39:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:15:04.119 12:39:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:04.119 12:39:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:04.119 12:39:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:04.119 12:39:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:04.119 12:39:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:04.119 12:39:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:04.119 12:39:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:04.119 12:39:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:04.119 12:39:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:04.119 12:39:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:04.119 12:39:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:04.119 12:39:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:04.119 12:39:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:04.119 12:39:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:04.119 12:39:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:04.119 12:39:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:04.119 12:39:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:04.119 12:39:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:04.119 12:39:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
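What nvmf_veth_init has built at this point (commands above, finished just below with the remaining bridge attachments and the iptables rules): one veth pair per interface, the target ends moved into the nvmf_tgt_ns_spdk namespace, addresses from 10.0.0.0/24, and everything stitched together through the nvmf_br bridge. Condensed from the trace:

# Topology built by nvmf_veth_init (common.sh@166..@202), condensed.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side stays in the root netns
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br         # first target interface
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2        # second target interface
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                      # initiator IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if        # first target IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2       # second target IP
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br      # bridge attachments; nvmf_tgt_br and nvmf_tgt_br2 follow below
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT             # allow NVMe/TCP in
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT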
00:15:04.119 12:39:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:04.119 12:39:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:04.119 12:39:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:04.376 12:39:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:04.376 12:39:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:04.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:04.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:15:04.376 00:15:04.376 --- 10.0.0.2 ping statistics --- 00:15:04.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.376 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:15:04.376 12:39:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:04.376 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:04.376 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:15:04.376 00:15:04.376 --- 10.0.0.3 ping statistics --- 00:15:04.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.376 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:15:04.376 12:39:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:04.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:04.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:15:04.376 00:15:04.376 --- 10.0.0.1 ping statistics --- 00:15:04.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.376 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:15:04.376 12:39:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:04.376 12:39:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:15:04.376 12:39:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:04.376 12:39:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:04.376 12:39:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:04.376 12:39:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:04.376 12:39:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:04.376 12:39:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:04.376 12:39:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:04.376 12:39:30 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:15:04.376 12:39:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:04.376 12:39:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:04.376 12:39:30 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=75052 00:15:04.376 12:39:30 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:04.376 12:39:30 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:04.376 12:39:30 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 75052 00:15:04.376 12:39:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 75052 ']' 00:15:04.376 12:39:30 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.376 12:39:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:04.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:04.376 12:39:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:04.376 12:39:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:04.376 12:39:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:04.376 [2024-07-12 12:39:30.300185] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:15:04.376 [2024-07-12 12:39:30.300300] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:04.376 [2024-07-12 12:39:30.438532] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:04.634 [2024-07-12 12:39:30.576360] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:04.634 [2024-07-12 12:39:30.576462] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:04.634 [2024-07-12 12:39:30.576489] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:04.634 [2024-07-12 12:39:30.576500] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:04.634 [2024-07-12 12:39:30.576509] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
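identify.sh then launches the target inside that namespace and waits for its RPC socket, exactly as traced above; the trace-snapshot hint comes from the app_setup_trace notices. A minimal sketch follows (backgrounding via $! is assumed; the trace only shows the resulting nvmfpid=75052):

# Bring up nvmf_tgt inside the namespace (identify.sh@18/@19, waitforlisten 75052).
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
waitforlisten "$nvmfpid"      # polls /var/tmp/spdk.sock until the target answers RPC
# Runtime tracing, per the notices above:
#   spdk_trace -s nvmf -i 0           # capture a snapshot of events at runtime
#   cp /dev/shm/nvmf_trace.0 .        # or keep the shm file for offline analysis/debug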
00:15:04.634 [2024-07-12 12:39:30.576688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:04.634 [2024-07-12 12:39:30.577025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:04.634 [2024-07-12 12:39:30.577543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:04.634 [2024-07-12 12:39:30.577554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.634 [2024-07-12 12:39:30.635507] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:05.585 12:39:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:05.585 12:39:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:15:05.585 12:39:31 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:05.585 12:39:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.585 12:39:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:05.585 [2024-07-12 12:39:31.328273] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:05.585 12:39:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.585 12:39:31 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:15:05.585 12:39:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:05.585 12:39:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:05.585 12:39:31 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:05.585 12:39:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.585 12:39:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:05.585 Malloc0 00:15:05.585 12:39:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.585 12:39:31 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:05.585 12:39:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.585 12:39:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:05.585 12:39:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.585 12:39:31 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:15:05.585 12:39:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.585 12:39:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:05.585 12:39:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.585 12:39:31 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:05.585 12:39:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.585 12:39:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:05.585 [2024-07-12 12:39:31.435930] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:05.585 12:39:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.585 12:39:31 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:05.585 12:39:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.585 12:39:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:05.585 12:39:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.585 12:39:31 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:15:05.585 12:39:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.585 12:39:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:05.585 [ 00:15:05.585 { 00:15:05.585 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:05.585 "subtype": "Discovery", 00:15:05.585 "listen_addresses": [ 00:15:05.585 { 00:15:05.585 "trtype": "TCP", 00:15:05.585 "adrfam": "IPv4", 00:15:05.585 "traddr": "10.0.0.2", 00:15:05.585 "trsvcid": "4420" 00:15:05.585 } 00:15:05.585 ], 00:15:05.585 "allow_any_host": true, 00:15:05.585 "hosts": [] 00:15:05.585 }, 00:15:05.585 { 00:15:05.585 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:05.585 "subtype": "NVMe", 00:15:05.585 "listen_addresses": [ 00:15:05.585 { 00:15:05.585 "trtype": "TCP", 00:15:05.585 "adrfam": "IPv4", 00:15:05.585 "traddr": "10.0.0.2", 00:15:05.585 "trsvcid": "4420" 00:15:05.585 } 00:15:05.585 ], 00:15:05.585 "allow_any_host": true, 00:15:05.585 "hosts": [], 00:15:05.585 "serial_number": "SPDK00000000000001", 00:15:05.585 "model_number": "SPDK bdev Controller", 00:15:05.585 "max_namespaces": 32, 00:15:05.585 "min_cntlid": 1, 00:15:05.585 "max_cntlid": 65519, 00:15:05.585 "namespaces": [ 00:15:05.585 { 00:15:05.585 "nsid": 1, 00:15:05.585 "bdev_name": "Malloc0", 00:15:05.585 "name": "Malloc0", 00:15:05.585 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:15:05.585 "eui64": "ABCDEF0123456789", 00:15:05.585 "uuid": "bfd2732e-e22f-4749-aa45-9056bba5a938" 00:15:05.585 } 00:15:05.585 ] 00:15:05.585 } 00:15:05.585 ] 00:15:05.585 12:39:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.585 12:39:31 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:15:05.585 [2024-07-12 12:39:31.497007] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:15:05.585 [2024-07-12 12:39:31.497091] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75093 ] 00:15:05.585 [2024-07-12 12:39:31.637915] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:15:05.585 [2024-07-12 12:39:31.637996] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:05.585 [2024-07-12 12:39:31.638003] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:05.585 [2024-07-12 12:39:31.638018] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:05.585 [2024-07-12 12:39:31.638026] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:05.585 [2024-07-12 12:39:31.638229] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:15:05.585 [2024-07-12 12:39:31.638284] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x19162c0 0 00:15:05.852 [2024-07-12 12:39:31.650423] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:05.852 [2024-07-12 12:39:31.650448] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:05.852 [2024-07-12 12:39:31.650455] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:05.852 [2024-07-12 12:39:31.650458] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:05.852 [2024-07-12 12:39:31.650508] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.852 [2024-07-12 12:39:31.650515] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.852 [2024-07-12 12:39:31.650520] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19162c0) 00:15:05.852 [2024-07-12 12:39:31.650535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:05.852 [2024-07-12 12:39:31.650568] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1957940, cid 0, qid 0 00:15:05.852 [2024-07-12 12:39:31.658422] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.852 [2024-07-12 12:39:31.658443] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.852 [2024-07-12 12:39:31.658449] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.852 [2024-07-12 12:39:31.658454] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1957940) on tqpair=0x19162c0 00:15:05.852 [2024-07-12 12:39:31.658468] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:05.852 [2024-07-12 12:39:31.658477] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:15:05.852 [2024-07-12 12:39:31.658483] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:15:05.852 [2024-07-12 12:39:31.658502] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.852 [2024-07-12 12:39:31.658507] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.852 
[2024-07-12 12:39:31.658511] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19162c0) 00:15:05.852 [2024-07-12 12:39:31.658521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.852 [2024-07-12 12:39:31.658549] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1957940, cid 0, qid 0 00:15:05.852 [2024-07-12 12:39:31.658615] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.852 [2024-07-12 12:39:31.658622] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.852 [2024-07-12 12:39:31.658626] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.852 [2024-07-12 12:39:31.658631] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1957940) on tqpair=0x19162c0 00:15:05.853 [2024-07-12 12:39:31.658637] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:15:05.853 [2024-07-12 12:39:31.658645] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:15:05.853 [2024-07-12 12:39:31.658653] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.853 [2024-07-12 12:39:31.658657] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.853 [2024-07-12 12:39:31.658661] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19162c0) 00:15:05.853 [2024-07-12 12:39:31.658669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.853 [2024-07-12 12:39:31.658688] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1957940, cid 0, qid 0 00:15:05.853 [2024-07-12 12:39:31.658732] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.853 [2024-07-12 12:39:31.658739] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.853 [2024-07-12 12:39:31.658742] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.853 [2024-07-12 12:39:31.658747] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1957940) on tqpair=0x19162c0 00:15:05.853 [2024-07-12 12:39:31.658753] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:15:05.853 [2024-07-12 12:39:31.658763] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:15:05.853 [2024-07-12 12:39:31.658770] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.853 [2024-07-12 12:39:31.658775] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.853 [2024-07-12 12:39:31.658779] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19162c0) 00:15:05.853 [2024-07-12 12:39:31.658786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.853 [2024-07-12 12:39:31.658804] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1957940, cid 0, qid 0 00:15:05.853 [2024-07-12 12:39:31.658852] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.853 [2024-07-12 12:39:31.658859] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.853 [2024-07-12 12:39:31.658863] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.853 [2024-07-12 12:39:31.658867] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1957940) on tqpair=0x19162c0 00:15:05.853 [2024-07-12 12:39:31.658873] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:05.853 [2024-07-12 12:39:31.658884] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.853 [2024-07-12 12:39:31.658888] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.853 [2024-07-12 12:39:31.658892] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19162c0) 00:15:05.853 [2024-07-12 12:39:31.658900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.853 [2024-07-12 12:39:31.658917] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1957940, cid 0, qid 0 00:15:05.853 [2024-07-12 12:39:31.658968] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.853 [2024-07-12 12:39:31.658975] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.853 [2024-07-12 12:39:31.658979] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.853 [2024-07-12 12:39:31.658983] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1957940) on tqpair=0x19162c0 00:15:05.853 [2024-07-12 12:39:31.658989] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:15:05.853 [2024-07-12 12:39:31.658994] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:15:05.853 [2024-07-12 12:39:31.659002] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:05.853 [2024-07-12 12:39:31.659108] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:15:05.853 [2024-07-12 12:39:31.659114] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:05.853 [2024-07-12 12:39:31.659123] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.853 [2024-07-12 12:39:31.659128] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.853 [2024-07-12 12:39:31.659132] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19162c0) 00:15:05.853 [2024-07-12 12:39:31.659140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.853 [2024-07-12 12:39:31.659159] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1957940, cid 0, qid 0 00:15:05.853 [2024-07-12 12:39:31.659209] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.853 [2024-07-12 12:39:31.659216] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.853 [2024-07-12 12:39:31.659220] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.853 
[2024-07-12 12:39:31.659225] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1957940) on tqpair=0x19162c0 00:15:05.853 [2024-07-12 12:39:31.659238] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:05.853 [2024-07-12 12:39:31.659248] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.853 [2024-07-12 12:39:31.659253] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.853 [2024-07-12 12:39:31.659257] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19162c0) 00:15:05.853 [2024-07-12 12:39:31.659265] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.853 [2024-07-12 12:39:31.659282] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1957940, cid 0, qid 0 00:15:05.853 [2024-07-12 12:39:31.659344] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.853 [2024-07-12 12:39:31.659352] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.853 [2024-07-12 12:39:31.659356] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.853 [2024-07-12 12:39:31.659360] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1957940) on tqpair=0x19162c0 00:15:05.853 [2024-07-12 12:39:31.659365] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:05.853 [2024-07-12 12:39:31.659370] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:15:05.853 [2024-07-12 12:39:31.659379] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:15:05.853 [2024-07-12 12:39:31.659389] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:15:05.853 [2024-07-12 12:39:31.659401] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.853 [2024-07-12 12:39:31.659418] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19162c0) 00:15:05.853 [2024-07-12 12:39:31.659427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.853 [2024-07-12 12:39:31.659448] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1957940, cid 0, qid 0 00:15:05.853 [2024-07-12 12:39:31.659537] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:05.853 [2024-07-12 12:39:31.659545] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:05.853 [2024-07-12 12:39:31.659549] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:05.853 [2024-07-12 12:39:31.659553] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19162c0): datao=0, datal=4096, cccid=0 00:15:05.853 [2024-07-12 12:39:31.659559] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1957940) on tqpair(0x19162c0): expected_datao=0, payload_size=4096 00:15:05.853 [2024-07-12 12:39:31.659564] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.853 
[2024-07-12 12:39:31.659572] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:05.853 [2024-07-12 12:39:31.659576] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:05.853 [2024-07-12 12:39:31.659585] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.853 [2024-07-12 12:39:31.659592] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.853 [2024-07-12 12:39:31.659595] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.853 [2024-07-12 12:39:31.659600] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1957940) on tqpair=0x19162c0 00:15:05.853 [2024-07-12 12:39:31.659610] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:15:05.853 [2024-07-12 12:39:31.659616] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:15:05.853 [2024-07-12 12:39:31.659621] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:15:05.853 [2024-07-12 12:39:31.659626] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:15:05.853 [2024-07-12 12:39:31.659631] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:15:05.853 [2024-07-12 12:39:31.659636] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:15:05.853 [2024-07-12 12:39:31.659645] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:15:05.853 [2024-07-12 12:39:31.659653] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.853 [2024-07-12 12:39:31.659658] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.853 [2024-07-12 12:39:31.659662] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19162c0) 00:15:05.853 [2024-07-12 12:39:31.659670] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:05.853 [2024-07-12 12:39:31.659690] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1957940, cid 0, qid 0 00:15:05.853 [2024-07-12 12:39:31.659752] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.853 [2024-07-12 12:39:31.659759] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.853 [2024-07-12 12:39:31.659763] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.853 [2024-07-12 12:39:31.659767] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1957940) on tqpair=0x19162c0 00:15:05.853 [2024-07-12 12:39:31.659784] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.853 [2024-07-12 12:39:31.659789] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.853 [2024-07-12 12:39:31.659793] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19162c0) 00:15:05.853 [2024-07-12 12:39:31.659800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:05.853 [2024-07-12 12:39:31.659807] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.853 [2024-07-12 12:39:31.659811] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.853 [2024-07-12 12:39:31.659814] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x19162c0) 00:15:05.853 [2024-07-12 12:39:31.659820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:05.853 [2024-07-12 12:39:31.659827] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.853 [2024-07-12 12:39:31.659831] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.854 [2024-07-12 12:39:31.659834] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x19162c0) 00:15:05.854 [2024-07-12 12:39:31.659840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:05.854 [2024-07-12 12:39:31.659847] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.854 [2024-07-12 12:39:31.659851] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.854 [2024-07-12 12:39:31.659854] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19162c0) 00:15:05.854 [2024-07-12 12:39:31.659861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:05.854 [2024-07-12 12:39:31.659866] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:15:05.854 [2024-07-12 12:39:31.659879] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:05.854 [2024-07-12 12:39:31.659887] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.854 [2024-07-12 12:39:31.659891] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19162c0) 00:15:05.854 [2024-07-12 12:39:31.659898] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.854 [2024-07-12 12:39:31.659919] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1957940, cid 0, qid 0 00:15:05.854 [2024-07-12 12:39:31.659926] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1957ac0, cid 1, qid 0 00:15:05.854 [2024-07-12 12:39:31.659931] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1957c40, cid 2, qid 0 00:15:05.854 [2024-07-12 12:39:31.659936] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1957dc0, cid 3, qid 0 00:15:05.854 [2024-07-12 12:39:31.659941] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1957f40, cid 4, qid 0 00:15:05.854 [2024-07-12 12:39:31.660026] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.854 [2024-07-12 12:39:31.660033] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.854 [2024-07-12 12:39:31.660037] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.854 [2024-07-12 12:39:31.660041] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1957f40) on tqpair=0x19162c0 00:15:05.854 [2024-07-12 12:39:31.660047] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:15:05.854 [2024-07-12 12:39:31.660057] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:15:05.854 [2024-07-12 12:39:31.660069] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.854 [2024-07-12 12:39:31.660074] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19162c0) 00:15:05.854 [2024-07-12 12:39:31.660081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.854 [2024-07-12 12:39:31.660099] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1957f40, cid 4, qid 0 00:15:05.854 [2024-07-12 12:39:31.660156] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:05.854 [2024-07-12 12:39:31.660163] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:05.854 [2024-07-12 12:39:31.660167] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:05.854 [2024-07-12 12:39:31.660171] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19162c0): datao=0, datal=4096, cccid=4 00:15:05.854 [2024-07-12 12:39:31.660176] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1957f40) on tqpair(0x19162c0): expected_datao=0, payload_size=4096 00:15:05.854 [2024-07-12 12:39:31.660181] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.854 [2024-07-12 12:39:31.660188] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:05.854 [2024-07-12 12:39:31.660192] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:05.854 [2024-07-12 12:39:31.660201] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.854 [2024-07-12 12:39:31.660207] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.854 [2024-07-12 12:39:31.660211] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.854 [2024-07-12 12:39:31.660215] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1957f40) on tqpair=0x19162c0 00:15:05.854 [2024-07-12 12:39:31.660229] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:15:05.854 [2024-07-12 12:39:31.660259] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.854 [2024-07-12 12:39:31.660265] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19162c0) 00:15:05.854 [2024-07-12 12:39:31.660272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.854 [2024-07-12 12:39:31.660280] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.854 [2024-07-12 12:39:31.660284] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.854 [2024-07-12 12:39:31.660287] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19162c0) 00:15:05.854 [2024-07-12 12:39:31.660294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:05.854 [2024-07-12 12:39:31.660318] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1957f40, cid 4, qid 0 00:15:05.854 [2024-07-12 12:39:31.660325] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19580c0, cid 5, qid 0 00:15:05.854 [2024-07-12 12:39:31.660449] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:05.854 [2024-07-12 12:39:31.660457] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:05.854 [2024-07-12 12:39:31.660461] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:05.854 [2024-07-12 12:39:31.660465] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19162c0): datao=0, datal=1024, cccid=4 00:15:05.854 [2024-07-12 12:39:31.660471] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1957f40) on tqpair(0x19162c0): expected_datao=0, payload_size=1024 00:15:05.854 [2024-07-12 12:39:31.660475] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.854 [2024-07-12 12:39:31.660482] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:05.854 [2024-07-12 12:39:31.660486] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:05.854 [2024-07-12 12:39:31.660492] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.854 [2024-07-12 12:39:31.660498] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.854 [2024-07-12 12:39:31.660502] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.854 [2024-07-12 12:39:31.660506] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19580c0) on tqpair=0x19162c0 00:15:05.854 [2024-07-12 12:39:31.660526] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.854 [2024-07-12 12:39:31.660534] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.854 [2024-07-12 12:39:31.660538] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.854 [2024-07-12 12:39:31.660542] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1957f40) on tqpair=0x19162c0 00:15:05.854 [2024-07-12 12:39:31.660555] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.854 [2024-07-12 12:39:31.660559] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19162c0) 00:15:05.854 [2024-07-12 12:39:31.660567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.854 [2024-07-12 12:39:31.660592] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1957f40, cid 4, qid 0 00:15:05.854 [2024-07-12 12:39:31.660660] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:05.854 [2024-07-12 12:39:31.660667] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:05.854 [2024-07-12 12:39:31.660670] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:05.854 [2024-07-12 12:39:31.660674] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19162c0): datao=0, datal=3072, cccid=4 00:15:05.854 [2024-07-12 12:39:31.660679] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1957f40) on tqpair(0x19162c0): expected_datao=0, payload_size=3072 00:15:05.854 [2024-07-12 12:39:31.660684] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.854 [2024-07-12 12:39:31.660691] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:05.854 [2024-07-12 12:39:31.660695] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:05.854 [2024-07-12 12:39:31.660704] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.854 [2024-07-12 12:39:31.660710] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.854 [2024-07-12 12:39:31.660714] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.854 [2024-07-12 12:39:31.660718] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1957f40) on tqpair=0x19162c0 00:15:05.854 [2024-07-12 12:39:31.660729] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.854 [2024-07-12 12:39:31.660734] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19162c0) 00:15:05.854 [2024-07-12 12:39:31.660741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.854 [2024-07-12 12:39:31.660764] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1957f40, cid 4, qid 0 00:15:05.854 [2024-07-12 12:39:31.660829] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:05.854 ===================================================== 00:15:05.854 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:15:05.854 ===================================================== 00:15:05.854 Controller Capabilities/Features 00:15:05.854 ================================ 00:15:05.854 Vendor ID: 0000 00:15:05.854 Subsystem Vendor ID: 0000 00:15:05.854 Serial Number: .................... 00:15:05.854 Model Number: ........................................ 00:15:05.854 Firmware Version: 24.09 00:15:05.854 Recommended Arb Burst: 0 00:15:05.854 IEEE OUI Identifier: 00 00 00 00:15:05.854 Multi-path I/O 00:15:05.854 May have multiple subsystem ports: No 00:15:05.854 May have multiple controllers: No 00:15:05.854 Associated with SR-IOV VF: No 00:15:05.854 Max Data Transfer Size: 131072 00:15:05.854 Max Number of Namespaces: 0 00:15:05.854 Max Number of I/O Queues: 1024 00:15:05.854 NVMe Specification Version (VS): 1.3 00:15:05.854 NVMe Specification Version (Identify): 1.3 00:15:05.854 Maximum Queue Entries: 128 00:15:05.854 Contiguous Queues Required: Yes 00:15:05.854 Arbitration Mechanisms Supported 00:15:05.854 Weighted Round Robin: Not Supported 00:15:05.854 Vendor Specific: Not Supported 00:15:05.854 Reset Timeout: 15000 ms 00:15:05.854 Doorbell Stride: 4 bytes 00:15:05.854 NVM Subsystem Reset: Not Supported 00:15:05.854 Command Sets Supported 00:15:05.854 NVM Command Set: Supported 00:15:05.854 Boot Partition: Not Supported 00:15:05.854 Memory Page Size Minimum: 4096 bytes 00:15:05.854 Memory Page Size Maximum: 4096 bytes 00:15:05.854 Persistent Memory Region: Not Supported 00:15:05.854 Optional Asynchronous Events Supported 00:15:05.854 Namespace Attribute Notices: Not Supported 00:15:05.854 Firmware Activation Notices: Not Supported 00:15:05.854 ANA Change Notices: Not Supported 00:15:05.854 PLE Aggregate Log Change Notices: Not Supported 00:15:05.854 LBA Status Info Alert Notices: Not Supported 00:15:05.855 EGE Aggregate Log Change Notices: Not Supported 00:15:05.855 Normal NVM Subsystem Shutdown event: Not Supported 00:15:05.855 Zone Descriptor Change Notices: Not Supported 00:15:05.855 Discovery Log Change Notices: Supported 00:15:05.855 Controller Attributes 00:15:05.855 128-bit Host Identifier: Not Supported 00:15:05.855 Non-Operational 
Permissive Mode: Not Supported 00:15:05.855 NVM Sets: Not Supported 00:15:05.855 Read Recovery Levels: Not Supported 00:15:05.855 Endurance Groups: Not Supported 00:15:05.855 Predictable Latency Mode: Not Supported 00:15:05.855 Traffic Based Keep ALive: Not Supported 00:15:05.855 Namespace Granularity: Not Supported 00:15:05.855 SQ Associations: Not Supported 00:15:05.855 UUID List: Not Supported 00:15:05.855 Multi-Domain Subsystem: Not Supported 00:15:05.855 Fixed Capacity Management: Not Supported 00:15:05.855 Variable Capacity Management: Not Supported 00:15:05.855 Delete Endurance Group: Not Supported 00:15:05.855 Delete NVM Set: Not Supported 00:15:05.855 Extended LBA Formats Supported: Not Supported 00:15:05.855 Flexible Data Placement Supported: Not Supported 00:15:05.855 00:15:05.855 Controller Memory Buffer Support 00:15:05.855 ================================ 00:15:05.855 Supported: No 00:15:05.855 00:15:05.855 Persistent Memory Region Support 00:15:05.855 ================================ 00:15:05.855 Supported: No 00:15:05.855 00:15:05.855 Admin Command Set Attributes 00:15:05.855 ============================ 00:15:05.855 Security Send/Receive: Not Supported 00:15:05.855 Format NVM: Not Supported 00:15:05.855 Firmware Activate/Download: Not Supported 00:15:05.855 Namespace Management: Not Supported 00:15:05.855 Device Self-Test: Not Supported 00:15:05.855 Directives: Not Supported 00:15:05.855 NVMe-MI: Not Supported 00:15:05.855 Virtualization Management: Not Supported 00:15:05.855 Doorbell Buffer Config: Not Supported 00:15:05.855 Get LBA Status Capability: Not Supported 00:15:05.855 Command & Feature Lockdown Capability: Not Supported 00:15:05.855 Abort Command Limit: 1 00:15:05.855 Async Event Request Limit: 4 00:15:05.855 Number of Firmware Slots: N/A 00:15:05.855 Firmware Slot 1 Read-Only: N/A 00:15:05.855 Firmware Activation Without Reset: N/A 00:15:05.855 [2024-07-12 12:39:31.660836] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:05.855 [2024-07-12 12:39:31.660841] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:05.855 [2024-07-12 12:39:31.660845] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19162c0): datao=0, datal=8, cccid=4 00:15:05.855 [2024-07-12 12:39:31.660849] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1957f40) on tqpair(0x19162c0): expected_datao=0, payload_size=8 00:15:05.855 [2024-07-12 12:39:31.660854] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.855 [2024-07-12 12:39:31.660861] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:05.855 [2024-07-12 12:39:31.660865] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:05.855 [2024-07-12 12:39:31.660881] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.855 [2024-07-12 12:39:31.660888] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.855 [2024-07-12 12:39:31.660892] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.855 [2024-07-12 12:39:31.660896] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1957f40) on tqpair=0x19162c0 00:15:05.855 Multiple Update Detection Support: N/A 00:15:05.855 Firmware Update Granularity: No Information Provided 00:15:05.855 Per-Namespace SMART Log: No 00:15:05.855 Asymmetric Namespace Access Log Page: Not Supported 00:15:05.855 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:15:05.855 Command Effects Log Page: Not Supported 
00:15:05.855 Get Log Page Extended Data: Supported 00:15:05.855 Telemetry Log Pages: Not Supported 00:15:05.855 Persistent Event Log Pages: Not Supported 00:15:05.855 Supported Log Pages Log Page: May Support 00:15:05.855 Commands Supported & Effects Log Page: Not Supported 00:15:05.855 Feature Identifiers & Effects Log Page:May Support 00:15:05.855 NVMe-MI Commands & Effects Log Page: May Support 00:15:05.855 Data Area 4 for Telemetry Log: Not Supported 00:15:05.855 Error Log Page Entries Supported: 128 00:15:05.855 Keep Alive: Not Supported 00:15:05.855 00:15:05.855 NVM Command Set Attributes 00:15:05.855 ========================== 00:15:05.855 Submission Queue Entry Size 00:15:05.855 Max: 1 00:15:05.855 Min: 1 00:15:05.855 Completion Queue Entry Size 00:15:05.855 Max: 1 00:15:05.855 Min: 1 00:15:05.855 Number of Namespaces: 0 00:15:05.855 Compare Command: Not Supported 00:15:05.855 Write Uncorrectable Command: Not Supported 00:15:05.855 Dataset Management Command: Not Supported 00:15:05.855 Write Zeroes Command: Not Supported 00:15:05.855 Set Features Save Field: Not Supported 00:15:05.855 Reservations: Not Supported 00:15:05.855 Timestamp: Not Supported 00:15:05.855 Copy: Not Supported 00:15:05.855 Volatile Write Cache: Not Present 00:15:05.855 Atomic Write Unit (Normal): 1 00:15:05.855 Atomic Write Unit (PFail): 1 00:15:05.855 Atomic Compare & Write Unit: 1 00:15:05.855 Fused Compare & Write: Supported 00:15:05.855 Scatter-Gather List 00:15:05.855 SGL Command Set: Supported 00:15:05.855 SGL Keyed: Supported 00:15:05.855 SGL Bit Bucket Descriptor: Not Supported 00:15:05.855 SGL Metadata Pointer: Not Supported 00:15:05.855 Oversized SGL: Not Supported 00:15:05.855 SGL Metadata Address: Not Supported 00:15:05.855 SGL Offset: Supported 00:15:05.855 Transport SGL Data Block: Not Supported 00:15:05.855 Replay Protected Memory Block: Not Supported 00:15:05.855 00:15:05.855 Firmware Slot Information 00:15:05.855 ========================= 00:15:05.855 Active slot: 0 00:15:05.855 00:15:05.855 00:15:05.855 Error Log 00:15:05.855 ========= 00:15:05.855 00:15:05.855 Active Namespaces 00:15:05.855 ================= 00:15:05.855 Discovery Log Page 00:15:05.855 ================== 00:15:05.855 Generation Counter: 2 00:15:05.855 Number of Records: 2 00:15:05.855 Record Format: 0 00:15:05.855 00:15:05.855 Discovery Log Entry 0 00:15:05.855 ---------------------- 00:15:05.855 Transport Type: 3 (TCP) 00:15:05.855 Address Family: 1 (IPv4) 00:15:05.855 Subsystem Type: 3 (Current Discovery Subsystem) 00:15:05.855 Entry Flags: 00:15:05.855 Duplicate Returned Information: 1 00:15:05.855 Explicit Persistent Connection Support for Discovery: 1 00:15:05.855 Transport Requirements: 00:15:05.855 Secure Channel: Not Required 00:15:05.855 Port ID: 0 (0x0000) 00:15:05.855 Controller ID: 65535 (0xffff) 00:15:05.855 Admin Max SQ Size: 128 00:15:05.855 Transport Service Identifier: 4420 00:15:05.855 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:15:05.855 Transport Address: 10.0.0.2 00:15:05.855 Discovery Log Entry 1 00:15:05.855 ---------------------- 00:15:05.855 Transport Type: 3 (TCP) 00:15:05.855 Address Family: 1 (IPv4) 00:15:05.855 Subsystem Type: 2 (NVM Subsystem) 00:15:05.855 Entry Flags: 00:15:05.855 Duplicate Returned Information: 0 00:15:05.855 Explicit Persistent Connection Support for Discovery: 0 00:15:05.855 Transport Requirements: 00:15:05.855 Secure Channel: Not Required 00:15:05.855 Port ID: 0 (0x0000) 00:15:05.855 Controller ID: 65535 (0xffff) 00:15:05.855 Admin Max SQ Size: 128 
00:15:05.855 Transport Service Identifier: 4420 00:15:05.855 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:15:05.855 Transport Address: 10.0.0.2 [2024-07-12 12:39:31.660996] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:15:05.855 [2024-07-12 12:39:31.661010] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1957940) on tqpair=0x19162c0 00:15:05.855 [2024-07-12 12:39:31.661017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:05.855 [2024-07-12 12:39:31.661023] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1957ac0) on tqpair=0x19162c0 00:15:05.855 [2024-07-12 12:39:31.661028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:05.855 [2024-07-12 12:39:31.661033] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1957c40) on tqpair=0x19162c0 00:15:05.855 [2024-07-12 12:39:31.661038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:05.855 [2024-07-12 12:39:31.661043] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1957dc0) on tqpair=0x19162c0 00:15:05.855 [2024-07-12 12:39:31.661048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:05.855 [2024-07-12 12:39:31.661058] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.855 [2024-07-12 12:39:31.661062] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.855 [2024-07-12 12:39:31.661066] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19162c0) 00:15:05.855 [2024-07-12 12:39:31.661074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.855 [2024-07-12 12:39:31.661096] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1957dc0, cid 3, qid 0 00:15:05.855 [2024-07-12 12:39:31.661145] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.855 [2024-07-12 12:39:31.661152] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.855 [2024-07-12 12:39:31.661156] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.855 [2024-07-12 12:39:31.661160] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1957dc0) on tqpair=0x19162c0 00:15:05.855 [2024-07-12 12:39:31.661168] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.855 [2024-07-12 12:39:31.661173] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.856 [2024-07-12 12:39:31.661177] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19162c0) 00:15:05.856 [2024-07-12 12:39:31.661184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.856 [2024-07-12 12:39:31.661206] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1957dc0, cid 3, qid 0 00:15:05.856 [2024-07-12 12:39:31.661268] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.856 [2024-07-12 12:39:31.661274] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.856 [2024-07-12 
12:39:31.661278] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.856 [2024-07-12 12:39:31.661283] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1957dc0) on tqpair=0x19162c0 00:15:05.856 [2024-07-12 12:39:31.661288] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:15:05.856 [2024-07-12 12:39:31.661293] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:15:05.856 [2024-07-12 12:39:31.661303] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.856 [2024-07-12 12:39:31.661308] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.856 [2024-07-12 12:39:31.661312] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19162c0) 00:15:05.856 [2024-07-12 12:39:31.661319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.856 [2024-07-12 12:39:31.661336] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1957dc0, cid 3, qid 0 00:15:05.856 [2024-07-12 12:39:31.661383] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.856 [2024-07-12 12:39:31.661390] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.856 [2024-07-12 12:39:31.661393] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.856 [2024-07-12 12:39:31.661398] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1957dc0) on tqpair=0x19162c0 00:15:05.856 [2024-07-12 12:39:31.661425] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.856 [2024-07-12 12:39:31.661431] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.856 [2024-07-12 12:39:31.661435] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19162c0) 00:15:05.856 [2024-07-12 12:39:31.661443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.856 [2024-07-12 12:39:31.661463] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1957dc0, cid 3, qid 0 00:15:05.856 [2024-07-12 12:39:31.661509] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.856 [2024-07-12 12:39:31.661515] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.856 [2024-07-12 12:39:31.661519] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.856 [2024-07-12 12:39:31.661524] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1957dc0) on tqpair=0x19162c0 00:15:05.856 [2024-07-12 12:39:31.661534] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.856 [2024-07-12 12:39:31.661539] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.856 [2024-07-12 12:39:31.661543] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19162c0) 00:15:05.856 [2024-07-12 12:39:31.661551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.856 [2024-07-12 12:39:31.661568] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1957dc0, cid 3, qid 0 00:15:05.856 [2024-07-12 12:39:31.661616] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.856 
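(Annotation.) The GET LOG PAGE (02) commands with cdw10 0x00ff0070, 0x02ff0070 and 0x00010070 a little earlier in this output are the host fetching the Discovery log (log page 0x70) whose decoded contents appear above (Generation Counter 2, two records: the discovery subsystem itself and nqn.2016-06.io.spdk:cnode1). A rough C sketch of issuing the same read through SPDK's public API is below; it assumes a ctrlr already connected to the discovery subsystem and skips the chunked/offset reads the driver performs for large pages.

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include "spdk/nvme.h"
    #include "spdk/nvmf_spec.h"

    static bool g_done;

    static void get_log_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        if (spdk_nvme_cpl_is_error(cpl)) {
            fprintf(stderr, "GET LOG PAGE failed\n");
        }
        g_done = true;
    }

    /* Fetch the first 4 KiB of the Discovery log from an already-connected
     * discovery controller and print the header fields seen in the dump above. */
    static int dump_discovery_log(struct spdk_nvme_ctrlr *ctrlr)
    {
        struct spdk_nvmf_discovery_log_page *log = calloc(1, 4096);
        int rc;

        if (log == NULL) {
            return -1;
        }

        rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
                                              SPDK_NVME_GLOBAL_NS_TAG, log, 4096,
                                              0 /* offset */, get_log_done, NULL);
        if (rc != 0) {
            free(log);
            return rc;
        }

        /* Poll the admin queue until the completion callback fires. */
        while (!g_done) {
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }

        printf("Generation Counter: %lu, Number of Records: %lu\n",
               (unsigned long)log->genctr, (unsigned long)log->numrec);
        free(log);
        return 0;
    }

The shutdown entries that resume below (RTD3E = 0 us, shutdown timeout = 10000 ms) are the same discovery controller being torn down once the log page has been printed.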
[2024-07-12 12:39:31.661623] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.856 [2024-07-12 12:39:31.661627] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.856 [2024-07-12 12:39:31.661631] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1957dc0) on tqpair=0x19162c0 00:15:05.856 [2024-07-12 12:39:31.661642] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.856 [2024-07-12 12:39:31.661646] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.856 [2024-07-12 12:39:31.661651] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19162c0) 00:15:05.856 [2024-07-12 12:39:31.661658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.856 [2024-07-12 12:39:31.661675] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1957dc0, cid 3, qid 0 00:15:05.856 [2024-07-12 12:39:31.661721] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.856 [2024-07-12 12:39:31.661727] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.856 [2024-07-12 12:39:31.661731] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.856 [2024-07-12 12:39:31.661736] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1957dc0) on tqpair=0x19162c0 00:15:05.856 [2024-07-12 12:39:31.661746] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.856 [2024-07-12 12:39:31.661751] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.856 [2024-07-12 12:39:31.661755] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19162c0) 00:15:05.856 [2024-07-12 12:39:31.661762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.856 [2024-07-12 12:39:31.661780] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1957dc0, cid 3, qid 0 00:15:05.856 [2024-07-12 12:39:31.661825] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.856 [2024-07-12 12:39:31.661832] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.856 [2024-07-12 12:39:31.661836] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.856 [2024-07-12 12:39:31.661840] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1957dc0) on tqpair=0x19162c0 00:15:05.856 [2024-07-12 12:39:31.661851] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.856 [2024-07-12 12:39:31.661856] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.856 [2024-07-12 12:39:31.661860] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19162c0) 00:15:05.856 [2024-07-12 12:39:31.661867] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.856 [2024-07-12 12:39:31.661884] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1957dc0, cid 3, qid 0 00:15:05.856 [2024-07-12 12:39:31.661932] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.856 [2024-07-12 12:39:31.661939] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.856 [2024-07-12 12:39:31.661943] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:15:05.856 [2024-07-12 12:39:31.661947] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1957dc0) on tqpair=0x19162c0 00:15:05.856 [2024-07-12 12:39:31.661958] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.856 [2024-07-12 12:39:31.661963] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.856 [2024-07-12 12:39:31.661966] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19162c0) 00:15:05.856 [2024-07-12 12:39:31.661974] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.856 [2024-07-12 12:39:31.661991] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1957dc0, cid 3, qid 0 00:15:05.856 [2024-07-12 12:39:31.662037] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.856 [2024-07-12 12:39:31.662043] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.856 [2024-07-12 12:39:31.662047] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.856 [2024-07-12 12:39:31.662051] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1957dc0) on tqpair=0x19162c0 00:15:05.856 [2024-07-12 12:39:31.662062] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.856 [2024-07-12 12:39:31.662067] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.856 [2024-07-12 12:39:31.662071] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19162c0) 00:15:05.856 [2024-07-12 12:39:31.662078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.856 [2024-07-12 12:39:31.662095] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1957dc0, cid 3, qid 0 00:15:05.856 [2024-07-12 12:39:31.662140] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.856 [2024-07-12 12:39:31.662147] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.856 [2024-07-12 12:39:31.662151] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.856 [2024-07-12 12:39:31.662156] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1957dc0) on tqpair=0x19162c0 00:15:05.856 [2024-07-12 12:39:31.662166] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.856 [2024-07-12 12:39:31.662171] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.856 [2024-07-12 12:39:31.662175] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19162c0) 00:15:05.856 [2024-07-12 12:39:31.662182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.856 [2024-07-12 12:39:31.662206] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1957dc0, cid 3, qid 0 00:15:05.856 [2024-07-12 12:39:31.662252] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.856 [2024-07-12 12:39:31.662259] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.856 [2024-07-12 12:39:31.662262] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.856 [2024-07-12 12:39:31.662267] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1957dc0) on tqpair=0x19162c0 00:15:05.856 [2024-07-12 12:39:31.662277] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.856 [2024-07-12 12:39:31.662282] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.856 [2024-07-12 12:39:31.662286] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19162c0) 00:15:05.856 [2024-07-12 12:39:31.662293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.856 [2024-07-12 12:39:31.662310] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1957dc0, cid 3, qid 0 00:15:05.856 [2024-07-12 12:39:31.662356] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.856 [2024-07-12 12:39:31.662363] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.856 [2024-07-12 12:39:31.662367] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.856 [2024-07-12 12:39:31.662371] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1957dc0) on tqpair=0x19162c0 00:15:05.856 [2024-07-12 12:39:31.662382] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.856 [2024-07-12 12:39:31.662387] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.856 [2024-07-12 12:39:31.662391] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19162c0) 00:15:05.856 [2024-07-12 12:39:31.662398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.856 [2024-07-12 12:39:31.666449] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1957dc0, cid 3, qid 0 00:15:05.856 [2024-07-12 12:39:31.666501] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.856 [2024-07-12 12:39:31.666509] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.856 [2024-07-12 12:39:31.666513] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.856 [2024-07-12 12:39:31.666517] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1957dc0) on tqpair=0x19162c0 00:15:05.856 [2024-07-12 12:39:31.666526] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:15:05.856 00:15:05.857 12:39:31 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:15:05.857 [2024-07-12 12:39:31.708513] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
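(Annotation.) The second identify run that starts here is driven by the spdk_nvme_identify binary shown on the command line, this time against nqn.2016-06.io.spdk:cnode1; under the hood it performs the connect/enable/identify sequence the debug entries below trace. A condensed sketch of doing the same thing with SPDK's public host API, reusing the transport ID string passed via -r above (error handling trimmed, return codes mostly unchecked):

    #include <stdio.h>
    #include <string.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid;
        struct spdk_nvme_ctrlr_opts ctrlr_opts;
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";
        if (spdk_env_init(&env_opts) != 0) {
            return 1;
        }

        /* Same transport ID string that host/identify.sh passes via -r. */
        memset(&trid, 0, sizeof(trid));
        spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1");

        /* Default opts keep the 10 s keep-alive timeout; the driver then sends
         * keep-alives at half that, the 5000000 us interval logged below. */
        spdk_nvme_ctrlr_get_default_ctrlr_opts(&ctrlr_opts, sizeof(ctrlr_opts));

        /* Runs the connect adminq / read vs / read cap / enable / identify
         * state machine that the *DEBUG* entries below walk through. */
        ctrlr = spdk_nvme_connect(&trid, &ctrlr_opts, sizeof(ctrlr_opts));
        if (ctrlr == NULL) {
            return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("CNTLID 0x%04x, MDTS %u, SN %.20s\n",
               cdata->cntlid, (unsigned)cdata->mdts, (const char *)cdata->sn);

        spdk_nvme_detach(ctrlr);
        return 0;
    }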
00:15:05.857 [2024-07-12 12:39:31.708572] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75095 ] 00:15:05.857 [2024-07-12 12:39:31.846097] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:15:05.857 [2024-07-12 12:39:31.846215] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:05.857 [2024-07-12 12:39:31.846224] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:05.857 [2024-07-12 12:39:31.846241] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:05.857 [2024-07-12 12:39:31.846250] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:15:05.857 [2024-07-12 12:39:31.850466] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:15:05.857 [2024-07-12 12:39:31.850539] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x18942c0 0 00:15:05.857 [2024-07-12 12:39:31.858431] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:05.857 [2024-07-12 12:39:31.858463] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:05.857 [2024-07-12 12:39:31.858469] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:05.857 [2024-07-12 12:39:31.858473] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:05.857 [2024-07-12 12:39:31.858528] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.857 [2024-07-12 12:39:31.858536] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.857 [2024-07-12 12:39:31.858540] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18942c0) 00:15:05.857 [2024-07-12 12:39:31.858557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:05.857 [2024-07-12 12:39:31.858592] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18d5940, cid 0, qid 0 00:15:05.857 [2024-07-12 12:39:31.866425] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.857 [2024-07-12 12:39:31.866455] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.857 [2024-07-12 12:39:31.866460] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.857 [2024-07-12 12:39:31.866466] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18d5940) on tqpair=0x18942c0 00:15:05.857 [2024-07-12 12:39:31.866480] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:05.857 [2024-07-12 12:39:31.866491] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:15:05.857 [2024-07-12 12:39:31.866499] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:15:05.857 [2024-07-12 12:39:31.866523] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.857 [2024-07-12 12:39:31.866528] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.857 [2024-07-12 12:39:31.866533] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18942c0) 00:15:05.857 [2024-07-12 12:39:31.866545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.857 [2024-07-12 12:39:31.866579] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18d5940, cid 0, qid 0 00:15:05.857 [2024-07-12 12:39:31.866639] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.857 [2024-07-12 12:39:31.866647] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.857 [2024-07-12 12:39:31.866651] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.857 [2024-07-12 12:39:31.866655] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18d5940) on tqpair=0x18942c0 00:15:05.857 [2024-07-12 12:39:31.866661] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:15:05.857 [2024-07-12 12:39:31.866669] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:15:05.857 [2024-07-12 12:39:31.866677] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.857 [2024-07-12 12:39:31.866682] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.857 [2024-07-12 12:39:31.866686] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18942c0) 00:15:05.857 [2024-07-12 12:39:31.866693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.857 [2024-07-12 12:39:31.866712] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18d5940, cid 0, qid 0 00:15:05.857 [2024-07-12 12:39:31.866758] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.857 [2024-07-12 12:39:31.866765] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.857 [2024-07-12 12:39:31.866769] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.857 [2024-07-12 12:39:31.866773] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18d5940) on tqpair=0x18942c0 00:15:05.857 [2024-07-12 12:39:31.866779] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:15:05.857 [2024-07-12 12:39:31.866789] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:15:05.857 [2024-07-12 12:39:31.866797] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.857 [2024-07-12 12:39:31.866801] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.857 [2024-07-12 12:39:31.866805] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18942c0) 00:15:05.857 [2024-07-12 12:39:31.866812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.857 [2024-07-12 12:39:31.866831] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18d5940, cid 0, qid 0 00:15:05.857 [2024-07-12 12:39:31.866882] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.857 [2024-07-12 12:39:31.866889] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.857 [2024-07-12 12:39:31.866893] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.857 [2024-07-12 12:39:31.866897] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18d5940) on tqpair=0x18942c0 00:15:05.857 [2024-07-12 12:39:31.866903] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:05.857 [2024-07-12 12:39:31.866914] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.857 [2024-07-12 12:39:31.866919] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.857 [2024-07-12 12:39:31.866923] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18942c0) 00:15:05.857 [2024-07-12 12:39:31.866930] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.857 [2024-07-12 12:39:31.866947] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18d5940, cid 0, qid 0 00:15:05.857 [2024-07-12 12:39:31.866996] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.857 [2024-07-12 12:39:31.867003] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.857 [2024-07-12 12:39:31.867006] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.857 [2024-07-12 12:39:31.867011] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18d5940) on tqpair=0x18942c0 00:15:05.857 [2024-07-12 12:39:31.867016] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:15:05.857 [2024-07-12 12:39:31.867021] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:15:05.857 [2024-07-12 12:39:31.867029] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:05.857 [2024-07-12 12:39:31.867136] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:15:05.857 [2024-07-12 12:39:31.867141] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:05.857 [2024-07-12 12:39:31.867151] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.857 [2024-07-12 12:39:31.867155] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.857 [2024-07-12 12:39:31.867159] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18942c0) 00:15:05.857 [2024-07-12 12:39:31.867166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.857 [2024-07-12 12:39:31.867194] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18d5940, cid 0, qid 0 00:15:05.857 [2024-07-12 12:39:31.867243] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.857 [2024-07-12 12:39:31.867249] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.857 [2024-07-12 12:39:31.867253] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.857 [2024-07-12 12:39:31.867258] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18d5940) on tqpair=0x18942c0 00:15:05.857 [2024-07-12 12:39:31.867263] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:05.857 [2024-07-12 12:39:31.867273] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.857 [2024-07-12 12:39:31.867278] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.857 [2024-07-12 12:39:31.867282] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18942c0) 00:15:05.857 [2024-07-12 12:39:31.867289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.858 [2024-07-12 12:39:31.867316] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18d5940, cid 0, qid 0 00:15:05.858 [2024-07-12 12:39:31.867364] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.858 [2024-07-12 12:39:31.867371] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.858 [2024-07-12 12:39:31.867375] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.858 [2024-07-12 12:39:31.867379] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18d5940) on tqpair=0x18942c0 00:15:05.858 [2024-07-12 12:39:31.867384] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:05.858 [2024-07-12 12:39:31.867390] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:15:05.858 [2024-07-12 12:39:31.867398] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:15:05.858 [2024-07-12 12:39:31.867421] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:15:05.858 [2024-07-12 12:39:31.867435] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.858 [2024-07-12 12:39:31.867440] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18942c0) 00:15:05.858 [2024-07-12 12:39:31.867448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.858 [2024-07-12 12:39:31.867469] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18d5940, cid 0, qid 0 00:15:05.858 [2024-07-12 12:39:31.867570] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:05.858 [2024-07-12 12:39:31.867577] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:05.858 [2024-07-12 12:39:31.867581] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:05.858 [2024-07-12 12:39:31.867585] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18942c0): datao=0, datal=4096, cccid=0 00:15:05.858 [2024-07-12 12:39:31.867591] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18d5940) on tqpair(0x18942c0): expected_datao=0, payload_size=4096 00:15:05.858 [2024-07-12 12:39:31.867596] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.858 [2024-07-12 12:39:31.867605] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:05.858 [2024-07-12 12:39:31.867610] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:05.858 [2024-07-12 
12:39:31.867620] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.858 [2024-07-12 12:39:31.867626] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.858 [2024-07-12 12:39:31.867630] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.858 [2024-07-12 12:39:31.867634] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18d5940) on tqpair=0x18942c0 00:15:05.858 [2024-07-12 12:39:31.867645] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:15:05.858 [2024-07-12 12:39:31.867651] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:15:05.858 [2024-07-12 12:39:31.867656] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:15:05.858 [2024-07-12 12:39:31.867661] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:15:05.858 [2024-07-12 12:39:31.867666] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:15:05.858 [2024-07-12 12:39:31.867671] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:15:05.858 [2024-07-12 12:39:31.867682] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:15:05.858 [2024-07-12 12:39:31.867690] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.858 [2024-07-12 12:39:31.867694] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.858 [2024-07-12 12:39:31.867698] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18942c0) 00:15:05.858 [2024-07-12 12:39:31.867706] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:05.858 [2024-07-12 12:39:31.867727] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18d5940, cid 0, qid 0 00:15:05.858 [2024-07-12 12:39:31.867777] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.858 [2024-07-12 12:39:31.867784] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.858 [2024-07-12 12:39:31.867788] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.858 [2024-07-12 12:39:31.867792] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18d5940) on tqpair=0x18942c0 00:15:05.858 [2024-07-12 12:39:31.867801] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.858 [2024-07-12 12:39:31.867806] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.858 [2024-07-12 12:39:31.867810] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18942c0) 00:15:05.858 [2024-07-12 12:39:31.867817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:05.858 [2024-07-12 12:39:31.867824] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.858 [2024-07-12 12:39:31.867828] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.858 [2024-07-12 12:39:31.867832] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x18942c0) 00:15:05.858 
[2024-07-12 12:39:31.867838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:05.858 [2024-07-12 12:39:31.867845] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.858 [2024-07-12 12:39:31.867849] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.858 [2024-07-12 12:39:31.867853] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x18942c0) 00:15:05.858 [2024-07-12 12:39:31.867859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:05.858 [2024-07-12 12:39:31.867866] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.858 [2024-07-12 12:39:31.867870] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.858 [2024-07-12 12:39:31.867874] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18942c0) 00:15:05.858 [2024-07-12 12:39:31.867880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:05.858 [2024-07-12 12:39:31.867885] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:05.858 [2024-07-12 12:39:31.867899] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:05.858 [2024-07-12 12:39:31.867907] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.858 [2024-07-12 12:39:31.867912] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18942c0) 00:15:05.858 [2024-07-12 12:39:31.867919] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.858 [2024-07-12 12:39:31.867941] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18d5940, cid 0, qid 0 00:15:05.858 [2024-07-12 12:39:31.867948] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18d5ac0, cid 1, qid 0 00:15:05.858 [2024-07-12 12:39:31.867953] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18d5c40, cid 2, qid 0 00:15:05.858 [2024-07-12 12:39:31.867959] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18d5dc0, cid 3, qid 0 00:15:05.858 [2024-07-12 12:39:31.867964] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18d5f40, cid 4, qid 0 00:15:05.858 [2024-07-12 12:39:31.868056] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.858 [2024-07-12 12:39:31.868063] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.858 [2024-07-12 12:39:31.868067] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.858 [2024-07-12 12:39:31.868071] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18d5f40) on tqpair=0x18942c0 00:15:05.858 [2024-07-12 12:39:31.868076] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:15:05.858 [2024-07-12 12:39:31.868086] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:05.858 [2024-07-12 12:39:31.868096] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:15:05.858 [2024-07-12 12:39:31.868103] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:05.858 [2024-07-12 12:39:31.868110] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.858 [2024-07-12 12:39:31.868114] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.858 [2024-07-12 12:39:31.868118] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18942c0) 00:15:05.858 [2024-07-12 12:39:31.868126] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:05.858 [2024-07-12 12:39:31.868143] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18d5f40, cid 4, qid 0 00:15:05.858 [2024-07-12 12:39:31.868198] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.858 [2024-07-12 12:39:31.868205] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.858 [2024-07-12 12:39:31.868209] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.858 [2024-07-12 12:39:31.868213] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18d5f40) on tqpair=0x18942c0 00:15:05.858 [2024-07-12 12:39:31.868277] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:15:05.858 [2024-07-12 12:39:31.868289] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:05.858 [2024-07-12 12:39:31.868298] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.858 [2024-07-12 12:39:31.868302] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18942c0) 00:15:05.858 [2024-07-12 12:39:31.868310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.858 [2024-07-12 12:39:31.868328] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18d5f40, cid 4, qid 0 00:15:05.858 [2024-07-12 12:39:31.868387] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:05.858 [2024-07-12 12:39:31.868394] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:05.858 [2024-07-12 12:39:31.868399] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:05.858 [2024-07-12 12:39:31.868416] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18942c0): datao=0, datal=4096, cccid=4 00:15:05.858 [2024-07-12 12:39:31.868421] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18d5f40) on tqpair(0x18942c0): expected_datao=0, payload_size=4096 00:15:05.858 [2024-07-12 12:39:31.868426] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.858 [2024-07-12 12:39:31.868435] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:05.858 [2024-07-12 12:39:31.868439] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:05.858 [2024-07-12 12:39:31.868448] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.858 [2024-07-12 12:39:31.868454] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:15:05.858 [2024-07-12 12:39:31.868458] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.858 [2024-07-12 12:39:31.868462] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18d5f40) on tqpair=0x18942c0 00:15:05.858 [2024-07-12 12:39:31.868478] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:15:05.858 [2024-07-12 12:39:31.868490] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:15:05.858 [2024-07-12 12:39:31.868502] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:15:05.859 [2024-07-12 12:39:31.868510] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.859 [2024-07-12 12:39:31.868515] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18942c0) 00:15:05.859 [2024-07-12 12:39:31.868522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.859 [2024-07-12 12:39:31.868544] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18d5f40, cid 4, qid 0 00:15:05.859 [2024-07-12 12:39:31.868618] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:05.859 [2024-07-12 12:39:31.868625] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:05.859 [2024-07-12 12:39:31.868629] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:05.859 [2024-07-12 12:39:31.868633] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18942c0): datao=0, datal=4096, cccid=4 00:15:05.859 [2024-07-12 12:39:31.868638] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18d5f40) on tqpair(0x18942c0): expected_datao=0, payload_size=4096 00:15:05.859 [2024-07-12 12:39:31.868642] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.859 [2024-07-12 12:39:31.868650] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:05.859 [2024-07-12 12:39:31.868654] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:05.859 [2024-07-12 12:39:31.868662] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.859 [2024-07-12 12:39:31.868668] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.859 [2024-07-12 12:39:31.868672] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.859 [2024-07-12 12:39:31.868676] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18d5f40) on tqpair=0x18942c0 00:15:05.859 [2024-07-12 12:39:31.868693] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:05.859 [2024-07-12 12:39:31.868704] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:05.859 [2024-07-12 12:39:31.868713] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.859 [2024-07-12 12:39:31.868718] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18942c0) 00:15:05.859 [2024-07-12 12:39:31.868725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 
cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.859 [2024-07-12 12:39:31.868746] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18d5f40, cid 4, qid 0 00:15:05.859 [2024-07-12 12:39:31.868808] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:05.859 [2024-07-12 12:39:31.868815] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:05.859 [2024-07-12 12:39:31.868819] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:05.859 [2024-07-12 12:39:31.868823] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18942c0): datao=0, datal=4096, cccid=4 00:15:05.859 [2024-07-12 12:39:31.868828] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18d5f40) on tqpair(0x18942c0): expected_datao=0, payload_size=4096 00:15:05.859 [2024-07-12 12:39:31.868832] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.859 [2024-07-12 12:39:31.868839] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:05.859 [2024-07-12 12:39:31.868843] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:05.859 [2024-07-12 12:39:31.868852] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.859 [2024-07-12 12:39:31.868858] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.859 [2024-07-12 12:39:31.868862] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.859 [2024-07-12 12:39:31.868866] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18d5f40) on tqpair=0x18942c0 00:15:05.859 [2024-07-12 12:39:31.868875] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:05.859 [2024-07-12 12:39:31.868884] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:15:05.859 [2024-07-12 12:39:31.868895] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:15:05.859 [2024-07-12 12:39:31.868903] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:15:05.859 [2024-07-12 12:39:31.868908] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:05.859 [2024-07-12 12:39:31.868914] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:15:05.859 [2024-07-12 12:39:31.868920] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:15:05.859 [2024-07-12 12:39:31.868925] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:15:05.859 [2024-07-12 12:39:31.868931] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:15:05.859 [2024-07-12 12:39:31.868953] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.859 [2024-07-12 12:39:31.868958] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18942c0) 00:15:05.859 [2024-07-12 12:39:31.868966] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.859 [2024-07-12 12:39:31.868974] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.859 [2024-07-12 12:39:31.868979] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.859 [2024-07-12 12:39:31.868983] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18942c0) 00:15:05.859 [2024-07-12 12:39:31.868989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:05.859 [2024-07-12 12:39:31.869014] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18d5f40, cid 4, qid 0 00:15:05.859 [2024-07-12 12:39:31.869022] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18d60c0, cid 5, qid 0 00:15:05.859 [2024-07-12 12:39:31.869085] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.859 [2024-07-12 12:39:31.869092] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.859 [2024-07-12 12:39:31.869096] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.859 [2024-07-12 12:39:31.869100] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18d5f40) on tqpair=0x18942c0 00:15:05.859 [2024-07-12 12:39:31.869108] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.859 [2024-07-12 12:39:31.869114] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.859 [2024-07-12 12:39:31.869117] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.859 [2024-07-12 12:39:31.869121] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18d60c0) on tqpair=0x18942c0 00:15:05.859 [2024-07-12 12:39:31.869132] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.859 [2024-07-12 12:39:31.869137] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18942c0) 00:15:05.859 [2024-07-12 12:39:31.869144] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.859 [2024-07-12 12:39:31.869162] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18d60c0, cid 5, qid 0 00:15:05.859 [2024-07-12 12:39:31.869212] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.859 [2024-07-12 12:39:31.869219] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.859 [2024-07-12 12:39:31.869223] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.859 [2024-07-12 12:39:31.869228] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18d60c0) on tqpair=0x18942c0 00:15:05.859 [2024-07-12 12:39:31.869239] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.859 [2024-07-12 12:39:31.869243] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18942c0) 00:15:05.859 [2024-07-12 12:39:31.869251] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.859 [2024-07-12 12:39:31.869267] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18d60c0, cid 5, qid 0 00:15:05.859 [2024-07-12 12:39:31.869318] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.859 
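(Editor's note, illustration only, not part of the captured job output.) The GET FEATURES records in this stretch (arbitration, power management, temperature threshold, number of queues) show the driver reading controller features back once the admin queue is live. Assuming an already-connected ctrlr, a hedged sketch of issuing one of them by hand through SPDK's raw admin-command interface (the helper name query_number_of_queues is made up for this example) might be:

/* Hypothetical sketch: issue GET FEATURES (Number of Queues, FID 0x07), the
 * same admin command printed as "GET FEATURES NUMBER OF QUEUES" in the log,
 * then poll the admin queue until the completion arrives. */
#include <stdbool.h>
#include <stdio.h>

#include "spdk/nvme.h"

static void
get_features_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	bool *done = cb_arg;

	if (!spdk_nvme_cpl_is_error(cpl)) {
		/* For FID 0x07, cdw0 carries the 0-based submission/completion
		 * queue counts granted by the controller. */
		printf("NUMBER OF QUEUES cdw0: 0x%08x\n", cpl->cdw0);
	}
	*done = true;
}

static int
query_number_of_queues(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_cmd cmd = {0};
	bool done = false;

	cmd.opc = SPDK_NVME_OPC_GET_FEATURES;
	cmd.cdw10 = SPDK_NVME_FEAT_NUMBER_OF_QUEUES;	/* FID 0x07, as in the log */

	if (spdk_nvme_ctrlr_cmd_admin_raw(ctrlr, &cmd, NULL, 0,
					  get_features_done, &done) != 0) {
		return -1;
	}
	while (!done) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
	return 0;
}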
[2024-07-12 12:39:31.869325] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.859 [2024-07-12 12:39:31.869328] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.859 [2024-07-12 12:39:31.869333] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18d60c0) on tqpair=0x18942c0 00:15:05.859 [2024-07-12 12:39:31.869343] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.859 [2024-07-12 12:39:31.869348] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18942c0) 00:15:05.859 [2024-07-12 12:39:31.869355] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.859 [2024-07-12 12:39:31.869372] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18d60c0, cid 5, qid 0 00:15:05.859 [2024-07-12 12:39:31.869439] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.859 [2024-07-12 12:39:31.869448] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.859 [2024-07-12 12:39:31.869452] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.859 [2024-07-12 12:39:31.869456] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18d60c0) on tqpair=0x18942c0 00:15:05.859 [2024-07-12 12:39:31.869478] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.859 [2024-07-12 12:39:31.869484] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18942c0) 00:15:05.859 [2024-07-12 12:39:31.869491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.859 [2024-07-12 12:39:31.869499] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.859 [2024-07-12 12:39:31.869503] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18942c0) 00:15:05.859 [2024-07-12 12:39:31.869510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.859 [2024-07-12 12:39:31.869519] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.859 [2024-07-12 12:39:31.869523] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x18942c0) 00:15:05.859 [2024-07-12 12:39:31.869530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.859 [2024-07-12 12:39:31.869542] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.859 [2024-07-12 12:39:31.869547] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x18942c0) 00:15:05.859 [2024-07-12 12:39:31.869553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.859 [2024-07-12 12:39:31.869576] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18d60c0, cid 5, qid 0 00:15:05.859 [2024-07-12 12:39:31.869583] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18d5f40, cid 4, qid 0 00:15:05.859 [2024-07-12 12:39:31.869588] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18d6240, cid 6, qid 0 00:15:05.859 [2024-07-12 12:39:31.869593] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18d63c0, cid 7, qid 0 00:15:05.859 [2024-07-12 12:39:31.869733] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:05.859 [2024-07-12 12:39:31.869741] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:05.859 [2024-07-12 12:39:31.869744] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:05.859 [2024-07-12 12:39:31.869748] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18942c0): datao=0, datal=8192, cccid=5 00:15:05.859 [2024-07-12 12:39:31.869753] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18d60c0) on tqpair(0x18942c0): expected_datao=0, payload_size=8192 00:15:05.860 [2024-07-12 12:39:31.869758] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.860 [2024-07-12 12:39:31.869775] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:05.860 [2024-07-12 12:39:31.869780] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:05.860 [2024-07-12 12:39:31.869786] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:05.860 [2024-07-12 12:39:31.869792] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:05.860 [2024-07-12 12:39:31.869796] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:05.860 [2024-07-12 12:39:31.869800] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18942c0): datao=0, datal=512, cccid=4 00:15:05.860 [2024-07-12 12:39:31.869804] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18d5f40) on tqpair(0x18942c0): expected_datao=0, payload_size=512 00:15:05.860 [2024-07-12 12:39:31.869809] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.860 [2024-07-12 12:39:31.869816] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:05.860 [2024-07-12 12:39:31.869820] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:05.860 [2024-07-12 12:39:31.869826] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:05.860 [2024-07-12 12:39:31.869831] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:05.860 [2024-07-12 12:39:31.869835] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:05.860 [2024-07-12 12:39:31.869839] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18942c0): datao=0, datal=512, cccid=6 00:15:05.860 [2024-07-12 12:39:31.869844] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18d6240) on tqpair(0x18942c0): expected_datao=0, payload_size=512 00:15:05.860 [2024-07-12 12:39:31.869848] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.860 [2024-07-12 12:39:31.869855] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:05.860 [2024-07-12 12:39:31.869858] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:05.860 [2024-07-12 12:39:31.869864] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:05.860 [2024-07-12 12:39:31.869870] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:05.860 [2024-07-12 12:39:31.869874] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:05.860 [2024-07-12 12:39:31.869877] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x18942c0): datao=0, datal=4096, cccid=7 00:15:05.860 [2024-07-12 12:39:31.869882] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18d63c0) on tqpair(0x18942c0): expected_datao=0, payload_size=4096 00:15:05.860 [2024-07-12 12:39:31.869886] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.860 [2024-07-12 12:39:31.869894] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:05.860 [2024-07-12 12:39:31.869897] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:05.860 [2024-07-12 12:39:31.869906] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.860 [2024-07-12 12:39:31.869912] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.860 [2024-07-12 12:39:31.869916] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.860 [2024-07-12 12:39:31.869920] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18d60c0) on tqpair=0x18942c0 00:15:05.860 [2024-07-12 12:39:31.869939] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.860 [2024-07-12 12:39:31.869946] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.860 [2024-07-12 12:39:31.869950] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.860 [2024-07-12 12:39:31.869954] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18d5f40) on tqpair=0x18942c0 00:15:05.860 [2024-07-12 12:39:31.869967] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.860 [2024-07-12 12:39:31.869974] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.860 [2024-07-12 12:39:31.869977] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.860 [2024-07-12 12:39:31.869991] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18d6240) on tqpair=0x18942c0 00:15:05.860 ===================================================== 00:15:05.860 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:05.860 ===================================================== 00:15:05.860 Controller Capabilities/Features 00:15:05.860 ================================ 00:15:05.860 Vendor ID: 8086 00:15:05.860 Subsystem Vendor ID: 8086 00:15:05.860 Serial Number: SPDK00000000000001 00:15:05.860 Model Number: SPDK bdev Controller 00:15:05.860 Firmware Version: 24.09 00:15:05.860 Recommended Arb Burst: 6 00:15:05.860 IEEE OUI Identifier: e4 d2 5c 00:15:05.860 Multi-path I/O 00:15:05.860 May have multiple subsystem ports: Yes 00:15:05.860 May have multiple controllers: Yes 00:15:05.860 Associated with SR-IOV VF: No 00:15:05.860 Max Data Transfer Size: 131072 00:15:05.860 Max Number of Namespaces: 32 00:15:05.860 Max Number of I/O Queues: 127 00:15:05.860 NVMe Specification Version (VS): 1.3 00:15:05.860 NVMe Specification Version (Identify): 1.3 00:15:05.860 Maximum Queue Entries: 128 00:15:05.860 Contiguous Queues Required: Yes 00:15:05.860 Arbitration Mechanisms Supported 00:15:05.860 Weighted Round Robin: Not Supported 00:15:05.860 Vendor Specific: Not Supported 00:15:05.860 Reset Timeout: 15000 ms 00:15:05.860 Doorbell Stride: 4 bytes 00:15:05.860 NVM Subsystem Reset: Not Supported 00:15:05.860 Command Sets Supported 00:15:05.860 NVM Command Set: Supported 00:15:05.860 Boot Partition: Not Supported 00:15:05.860 Memory Page Size Minimum: 4096 bytes 00:15:05.860 Memory Page Size Maximum: 4096 bytes 00:15:05.860 Persistent Memory Region: Not Supported 00:15:05.860 
Optional Asynchronous Events Supported 00:15:05.860 Namespace Attribute Notices: Supported 00:15:05.860 Firmware Activation Notices: Not Supported 00:15:05.860 ANA Change Notices: Not Supported 00:15:05.860 PLE Aggregate Log Change Notices: Not Supported 00:15:05.860 LBA Status Info Alert Notices: Not Supported 00:15:05.860 EGE Aggregate Log Change Notices: Not Supported 00:15:05.860 Normal NVM Subsystem Shutdown event: Not Supported 00:15:05.860 Zone Descriptor Change Notices: Not Supported 00:15:05.860 Discovery Log Change Notices: Not Supported 00:15:05.860 Controller Attributes 00:15:05.860 128-bit Host Identifier: Supported 00:15:05.860 Non-Operational Permissive Mode: Not Supported 00:15:05.860 NVM Sets: Not Supported 00:15:05.860 Read Recovery Levels: Not Supported 00:15:05.860 Endurance Groups: Not Supported 00:15:05.860 Predictable Latency Mode: Not Supported 00:15:05.860 Traffic Based Keep ALive: Not Supported 00:15:05.860 Namespace Granularity: Not Supported 00:15:05.860 SQ Associations: Not Supported 00:15:05.860 UUID List: Not Supported 00:15:05.860 Multi-Domain Subsystem: Not Supported 00:15:05.860 Fixed Capacity Management: Not Supported 00:15:05.860 Variable Capacity Management: Not Supported 00:15:05.860 Delete Endurance Group: Not Supported 00:15:05.860 Delete NVM Set: Not Supported 00:15:05.860 Extended LBA Formats Supported: Not Supported 00:15:05.860 Flexible Data Placement Supported: Not Supported 00:15:05.860 00:15:05.860 Controller Memory Buffer Support 00:15:05.860 ================================ 00:15:05.860 Supported: No 00:15:05.860 00:15:05.860 Persistent Memory Region Support 00:15:05.860 ================================ 00:15:05.860 Supported: No 00:15:05.860 00:15:05.860 Admin Command Set Attributes 00:15:05.860 ============================ 00:15:05.860 Security Send/Receive: Not Supported 00:15:05.860 Format NVM: Not Supported 00:15:05.860 Firmware Activate/Download: Not Supported 00:15:05.860 Namespace Management: Not Supported 00:15:05.860 Device Self-Test: Not Supported 00:15:05.860 Directives: Not Supported 00:15:05.860 NVMe-MI: Not Supported 00:15:05.860 Virtualization Management: Not Supported 00:15:05.860 Doorbell Buffer Config: Not Supported 00:15:05.860 Get LBA Status Capability: Not Supported 00:15:05.860 Command & Feature Lockdown Capability: Not Supported 00:15:05.860 Abort Command Limit: 4 00:15:05.860 Async Event Request Limit: 4 00:15:05.860 Number of Firmware Slots: N/A 00:15:05.860 Firmware Slot 1 Read-Only: N/A 00:15:05.860 [2024-07-12 12:39:31.869998] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.860 [2024-07-12 12:39:31.870005] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.860 [2024-07-12 12:39:31.870009] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.860 [2024-07-12 12:39:31.870013] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18d63c0) on tqpair=0x18942c0 00:15:05.860 Firmware Activation Without Reset: N/A 00:15:05.860 Multiple Update Detection Support: N/A 00:15:05.860 Firmware Update Granularity: No Information Provided 00:15:05.860 Per-Namespace SMART Log: No 00:15:05.860 Asymmetric Namespace Access Log Page: Not Supported 00:15:05.860 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:15:05.860 Command Effects Log Page: Supported 00:15:05.860 Get Log Page Extended Data: Supported 00:15:05.860 Telemetry Log Pages: Not Supported 00:15:05.860 Persistent Event Log Pages: Not Supported 00:15:05.860 Supported Log Pages Log Page: May 
Support 00:15:05.860 Commands Supported & Effects Log Page: Not Supported 00:15:05.860 Feature Identifiers & Effects Log Page:May Support 00:15:05.860 NVMe-MI Commands & Effects Log Page: May Support 00:15:05.860 Data Area 4 for Telemetry Log: Not Supported 00:15:05.860 Error Log Page Entries Supported: 128 00:15:05.860 Keep Alive: Supported 00:15:05.860 Keep Alive Granularity: 10000 ms 00:15:05.860 00:15:05.860 NVM Command Set Attributes 00:15:05.860 ========================== 00:15:05.860 Submission Queue Entry Size 00:15:05.860 Max: 64 00:15:05.860 Min: 64 00:15:05.860 Completion Queue Entry Size 00:15:05.860 Max: 16 00:15:05.860 Min: 16 00:15:05.860 Number of Namespaces: 32 00:15:05.860 Compare Command: Supported 00:15:05.860 Write Uncorrectable Command: Not Supported 00:15:05.860 Dataset Management Command: Supported 00:15:05.860 Write Zeroes Command: Supported 00:15:05.860 Set Features Save Field: Not Supported 00:15:05.860 Reservations: Supported 00:15:05.860 Timestamp: Not Supported 00:15:05.860 Copy: Supported 00:15:05.860 Volatile Write Cache: Present 00:15:05.860 Atomic Write Unit (Normal): 1 00:15:05.861 Atomic Write Unit (PFail): 1 00:15:05.861 Atomic Compare & Write Unit: 1 00:15:05.861 Fused Compare & Write: Supported 00:15:05.861 Scatter-Gather List 00:15:05.861 SGL Command Set: Supported 00:15:05.861 SGL Keyed: Supported 00:15:05.861 SGL Bit Bucket Descriptor: Not Supported 00:15:05.861 SGL Metadata Pointer: Not Supported 00:15:05.861 Oversized SGL: Not Supported 00:15:05.861 SGL Metadata Address: Not Supported 00:15:05.861 SGL Offset: Supported 00:15:05.861 Transport SGL Data Block: Not Supported 00:15:05.861 Replay Protected Memory Block: Not Supported 00:15:05.861 00:15:05.861 Firmware Slot Information 00:15:05.861 ========================= 00:15:05.861 Active slot: 1 00:15:05.861 Slot 1 Firmware Revision: 24.09 00:15:05.861 00:15:05.861 00:15:05.861 Commands Supported and Effects 00:15:05.861 ============================== 00:15:05.861 Admin Commands 00:15:05.861 -------------- 00:15:05.861 Get Log Page (02h): Supported 00:15:05.861 Identify (06h): Supported 00:15:05.861 Abort (08h): Supported 00:15:05.861 Set Features (09h): Supported 00:15:05.861 Get Features (0Ah): Supported 00:15:05.861 Asynchronous Event Request (0Ch): Supported 00:15:05.861 Keep Alive (18h): Supported 00:15:05.861 I/O Commands 00:15:05.861 ------------ 00:15:05.861 Flush (00h): Supported LBA-Change 00:15:05.861 Write (01h): Supported LBA-Change 00:15:05.861 Read (02h): Supported 00:15:05.861 Compare (05h): Supported 00:15:05.861 Write Zeroes (08h): Supported LBA-Change 00:15:05.861 Dataset Management (09h): Supported LBA-Change 00:15:05.861 Copy (19h): Supported LBA-Change 00:15:05.861 00:15:05.861 Error Log 00:15:05.861 ========= 00:15:05.861 00:15:05.861 Arbitration 00:15:05.861 =========== 00:15:05.861 Arbitration Burst: 1 00:15:05.861 00:15:05.861 Power Management 00:15:05.861 ================ 00:15:05.861 Number of Power States: 1 00:15:05.861 Current Power State: Power State #0 00:15:05.861 Power State #0: 00:15:05.861 Max Power: 0.00 W 00:15:05.861 Non-Operational State: Operational 00:15:05.861 Entry Latency: Not Reported 00:15:05.861 Exit Latency: Not Reported 00:15:05.861 Relative Read Throughput: 0 00:15:05.861 Relative Read Latency: 0 00:15:05.861 Relative Write Throughput: 0 00:15:05.861 Relative Write Latency: 0 00:15:05.861 Idle Power: Not Reported 00:15:05.861 Active Power: Not Reported 00:15:05.861 Non-Operational Permissive Mode: Not Supported 00:15:05.861 00:15:05.861 Health 
Information 00:15:05.861 ================== 00:15:05.861 Critical Warnings: 00:15:05.861 Available Spare Space: OK 00:15:05.861 Temperature: OK 00:15:05.861 Device Reliability: OK 00:15:05.861 Read Only: No 00:15:05.861 Volatile Memory Backup: OK 00:15:05.861 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:05.861 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:05.861 Available Spare: 0% 00:15:05.861 Available Spare Threshold: 0% 00:15:05.861 Life Percentage Used:[2024-07-12 12:39:31.870128] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.861 [2024-07-12 12:39:31.870135] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x18942c0) 00:15:05.861 [2024-07-12 12:39:31.870143] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.861 [2024-07-12 12:39:31.870165] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18d63c0, cid 7, qid 0 00:15:05.861 [2024-07-12 12:39:31.870217] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.861 [2024-07-12 12:39:31.870224] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.861 [2024-07-12 12:39:31.870228] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.861 [2024-07-12 12:39:31.870232] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18d63c0) on tqpair=0x18942c0 00:15:05.861 [2024-07-12 12:39:31.870272] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:15:05.861 [2024-07-12 12:39:31.870283] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18d5940) on tqpair=0x18942c0 00:15:05.861 [2024-07-12 12:39:31.870290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:05.861 [2024-07-12 12:39:31.870296] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18d5ac0) on tqpair=0x18942c0 00:15:05.861 [2024-07-12 12:39:31.870301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:05.861 [2024-07-12 12:39:31.870306] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18d5c40) on tqpair=0x18942c0 00:15:05.861 [2024-07-12 12:39:31.870311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:05.861 [2024-07-12 12:39:31.870316] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18d5dc0) on tqpair=0x18942c0 00:15:05.861 [2024-07-12 12:39:31.870322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:05.861 [2024-07-12 12:39:31.870331] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.861 [2024-07-12 12:39:31.870336] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.861 [2024-07-12 12:39:31.870340] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18942c0) 00:15:05.861 [2024-07-12 12:39:31.870347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.861 [2024-07-12 12:39:31.870369] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18d5dc0, cid 3, qid 0 00:15:05.861 [2024-07-12 
12:39:31.874423] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.861 [2024-07-12 12:39:31.874441] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.861 [2024-07-12 12:39:31.874446] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.861 [2024-07-12 12:39:31.874450] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18d5dc0) on tqpair=0x18942c0 00:15:05.861 [2024-07-12 12:39:31.874460] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.861 [2024-07-12 12:39:31.874465] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.861 [2024-07-12 12:39:31.874469] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18942c0) 00:15:05.861 [2024-07-12 12:39:31.874477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.861 [2024-07-12 12:39:31.874507] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18d5dc0, cid 3, qid 0 00:15:05.861 [2024-07-12 12:39:31.874577] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.861 [2024-07-12 12:39:31.874584] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.861 [2024-07-12 12:39:31.874588] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.861 [2024-07-12 12:39:31.874592] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18d5dc0) on tqpair=0x18942c0 00:15:05.861 [2024-07-12 12:39:31.874598] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:15:05.861 [2024-07-12 12:39:31.874603] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:15:05.861 [2024-07-12 12:39:31.874613] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.861 [2024-07-12 12:39:31.874618] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.861 [2024-07-12 12:39:31.874622] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18942c0) 00:15:05.861 [2024-07-12 12:39:31.874630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.861 [2024-07-12 12:39:31.874647] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18d5dc0, cid 3, qid 0 00:15:05.861 [2024-07-12 12:39:31.874693] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.861 [2024-07-12 12:39:31.874700] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.861 [2024-07-12 12:39:31.874704] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.861 [2024-07-12 12:39:31.874708] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18d5dc0) on tqpair=0x18942c0 00:15:05.861 [2024-07-12 12:39:31.874720] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.861 [2024-07-12 12:39:31.874725] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.861 [2024-07-12 12:39:31.874729] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18942c0) 00:15:05.861 [2024-07-12 12:39:31.874736] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.861 [2024-07-12 12:39:31.874753] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18d5dc0, cid 3, qid 0 00:15:05.861 [2024-07-12 12:39:31.874799] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.861 [2024-07-12 12:39:31.874806] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.861 [2024-07-12 12:39:31.874810] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.862 [2024-07-12 12:39:31.874814] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18d5dc0) on tqpair=0x18942c0 00:15:05.862 [2024-07-12 12:39:31.874825] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.862 [2024-07-12 12:39:31.874830] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.862 [2024-07-12 12:39:31.874833] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18942c0) 00:15:05.862 [2024-07-12 12:39:31.874841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.862 [2024-07-12 12:39:31.874857] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18d5dc0, cid 3, qid 0 00:15:05.862 [2024-07-12 12:39:31.874902] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.862 [2024-07-12 12:39:31.874909] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.862 [2024-07-12 12:39:31.874913] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.862 [2024-07-12 12:39:31.874917] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18d5dc0) on tqpair=0x18942c0 00:15:05.862 [2024-07-12 12:39:31.874928] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.862 [2024-07-12 12:39:31.874933] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.862 [2024-07-12 12:39:31.874937] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18942c0) 00:15:05.862 [2024-07-12 12:39:31.874944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.862 [2024-07-12 12:39:31.874961] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18d5dc0, cid 3, qid 0 00:15:05.862 [2024-07-12 12:39:31.875013] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.862 [2024-07-12 12:39:31.875020] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.862 [2024-07-12 12:39:31.875024] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.862 [2024-07-12 12:39:31.875028] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18d5dc0) on tqpair=0x18942c0 00:15:05.862 [2024-07-12 12:39:31.875039] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.862 [2024-07-12 12:39:31.875044] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.862 [2024-07-12 12:39:31.875048] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18942c0) 00:15:05.862 [2024-07-12 12:39:31.875055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.862 [2024-07-12 12:39:31.875072] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18d5dc0, cid 3, qid 0 00:15:05.862 [2024-07-12 12:39:31.875123] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.862 [2024-07-12 
12:39:31.875129] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.862 [2024-07-12 12:39:31.875134] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.862 [2024-07-12 12:39:31.875138] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18d5dc0) on tqpair=0x18942c0 00:15:05.862 [2024-07-12 12:39:31.875149] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.862 [2024-07-12 12:39:31.875153] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.862 [2024-07-12 12:39:31.875157] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18942c0) 00:15:05.862 [2024-07-12 12:39:31.875165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.862 [2024-07-12 12:39:31.875182] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18d5dc0, cid 3, qid 0 00:15:05.862 [2024-07-12 12:39:31.875229] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.862 [2024-07-12 12:39:31.875236] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.862 [2024-07-12 12:39:31.875240] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.862 [2024-07-12 12:39:31.875244] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18d5dc0) on tqpair=0x18942c0 00:15:05.862 [2024-07-12 12:39:31.875255] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.862 [2024-07-12 12:39:31.875260] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.862 [2024-07-12 12:39:31.875264] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18942c0) 00:15:05.862 [2024-07-12 12:39:31.875271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.862 [2024-07-12 12:39:31.875287] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18d5dc0, cid 3, qid 0 00:15:05.862 [2024-07-12 12:39:31.875345] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.862 [2024-07-12 12:39:31.875353] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.862 [2024-07-12 12:39:31.875356] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.862 [2024-07-12 12:39:31.875361] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18d5dc0) on tqpair=0x18942c0 00:15:05.862 [2024-07-12 12:39:31.875372] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.862 [2024-07-12 12:39:31.875376] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.862 [2024-07-12 12:39:31.875380] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18942c0) 00:15:05.862 [2024-07-12 12:39:31.875387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.862 [2024-07-12 12:39:31.875418] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18d5dc0, cid 3, qid 0 00:15:05.862 [2024-07-12 12:39:31.875472] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.862 [2024-07-12 12:39:31.875479] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.862 [2024-07-12 12:39:31.875482] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.862 
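(Editor's note, illustration only, not part of the captured job output.) The "Prepare to destruct SSD" record together with the long run of FABRIC PROPERTY GET commands around this point is the shutdown path: the driver sets the shutdown notification and then keeps reading controller status over property-get until the target reports shutdown complete or the 10000 ms timeout quoted above expires. From the application side this is simply a detach; a minimal sketch using SPDK's asynchronous detach API (the helper name shutdown_controller is illustrative) could be:

/* Hypothetical sketch: detach the controller and drive the polling loop that
 * produces the repeated FABRIC PROPERTY GET records seen in this log. */
#include <errno.h>

#include "spdk/nvme.h"

static void
shutdown_controller(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_detach_ctx *ctx = NULL;

	/* Starts the asynchronous destruct path (shutdown notification). */
	if (spdk_nvme_detach_async(ctrlr, &ctx) != 0) {
		return;
	}

	/* Each poll advances the shutdown state machine; it keeps returning
	 * -EAGAIN until the controller reports shutdown complete. */
	while (spdk_nvme_detach_poll_async(ctx) == -EAGAIN) {
		;
	}
}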
[2024-07-12 12:39:31.875487] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18d5dc0) on tqpair=0x18942c0 00:15:05.862 [2024-07-12 12:39:31.875498] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.862 [2024-07-12 12:39:31.875503] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.862 [2024-07-12 12:39:31.875507] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18942c0) 00:15:05.862 [2024-07-12 12:39:31.875514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.862 [2024-07-12 12:39:31.875533] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18d5dc0, cid 3, qid 0 00:15:05.862 [2024-07-12 12:39:31.875578] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.862 [2024-07-12 12:39:31.875585] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.862 [2024-07-12 12:39:31.875589] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.862 [2024-07-12 12:39:31.875593] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18d5dc0) on tqpair=0x18942c0 00:15:05.862 [2024-07-12 12:39:31.875604] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.862 [2024-07-12 12:39:31.875609] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.862 [2024-07-12 12:39:31.875613] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18942c0) 00:15:05.862 [2024-07-12 12:39:31.875620] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.862 [2024-07-12 12:39:31.875637] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18d5dc0, cid 3, qid 0 00:15:05.862 [2024-07-12 12:39:31.875688] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.862 [2024-07-12 12:39:31.875695] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.862 [2024-07-12 12:39:31.875699] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.862 [2024-07-12 12:39:31.875703] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18d5dc0) on tqpair=0x18942c0 00:15:05.862 [2024-07-12 12:39:31.875714] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.862 [2024-07-12 12:39:31.875718] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.862 [2024-07-12 12:39:31.875723] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18942c0) 00:15:05.862 [2024-07-12 12:39:31.875730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.862 [2024-07-12 12:39:31.875747] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18d5dc0, cid 3, qid 0 00:15:05.862 [2024-07-12 12:39:31.875795] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.862 [2024-07-12 12:39:31.875801] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.862 [2024-07-12 12:39:31.875805] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.862 [2024-07-12 12:39:31.875810] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18d5dc0) on tqpair=0x18942c0 00:15:05.862 [2024-07-12 12:39:31.875820] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.862 [2024-07-12 12:39:31.875825] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.862 [2024-07-12 12:39:31.875829] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18942c0) 00:15:05.862 [2024-07-12 12:39:31.875836] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.862 [2024-07-12 12:39:31.875853] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18d5dc0, cid 3, qid 0
[2024-07-12 12:39:31.878231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.864 [2024-07-12 12:39:31.878249] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18d5dc0, cid 3, qid 0 00:15:05.864 [2024-07-12 12:39:31.878301] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.864 [2024-07-12 12:39:31.878308] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.864 [2024-07-12 12:39:31.878312] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.864 [2024-07-12 12:39:31.878317] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18d5dc0) on tqpair=0x18942c0 00:15:05.864 [2024-07-12 12:39:31.878327] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.864 [2024-07-12 12:39:31.878332] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.864 [2024-07-12 12:39:31.878336] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18942c0) 00:15:05.864 [2024-07-12 12:39:31.878344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.864 [2024-07-12 12:39:31.878361] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18d5dc0, cid 3, qid 0 00:15:05.864 [2024-07-12 12:39:31.882427] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.864 [2024-07-12 12:39:31.882447] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.864 [2024-07-12 12:39:31.882451] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.864 [2024-07-12 12:39:31.882456] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18d5dc0) on tqpair=0x18942c0 00:15:05.864 [2024-07-12 12:39:31.882471] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:05.864 [2024-07-12 12:39:31.882476] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:05.864 [2024-07-12 12:39:31.882481] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18942c0) 00:15:05.864 [2024-07-12 12:39:31.882490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:05.864 [2024-07-12 12:39:31.882516] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18d5dc0, cid 3, qid 0 00:15:05.864 [2024-07-12 12:39:31.882565] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:05.864 [2024-07-12 12:39:31.882572] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:05.864 [2024-07-12 12:39:31.882576] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:05.864 [2024-07-12 12:39:31.882580] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18d5dc0) on tqpair=0x18942c0 00:15:05.864 [2024-07-12 12:39:31.882589] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:15:05.864 0% 00:15:05.864 Data Units Read: 0 00:15:05.864 Data Units Written: 0 00:15:05.864 Host Read Commands: 0 00:15:05.864 Host Write Commands: 0 00:15:05.864 Controller Busy Time: 0 minutes 00:15:05.864 Power Cycles: 0 00:15:05.864 Power On Hours: 0 hours 00:15:05.864 Unsafe Shutdowns: 0 00:15:05.864 Unrecoverable Media Errors: 0 00:15:05.864 Lifetime Error Log Entries: 0 00:15:05.864 Warning 
Temperature Time: 0 minutes 00:15:05.864 Critical Temperature Time: 0 minutes 00:15:05.864 00:15:05.864 Number of Queues 00:15:05.864 ================ 00:15:05.864 Number of I/O Submission Queues: 127 00:15:05.864 Number of I/O Completion Queues: 127 00:15:05.864 00:15:05.864 Active Namespaces 00:15:05.864 ================= 00:15:05.864 Namespace ID:1 00:15:05.864 Error Recovery Timeout: Unlimited 00:15:05.864 Command Set Identifier: NVM (00h) 00:15:05.864 Deallocate: Supported 00:15:05.864 Deallocated/Unwritten Error: Not Supported 00:15:05.864 Deallocated Read Value: Unknown 00:15:05.864 Deallocate in Write Zeroes: Not Supported 00:15:05.864 Deallocated Guard Field: 0xFFFF 00:15:05.864 Flush: Supported 00:15:05.864 Reservation: Supported 00:15:05.864 Namespace Sharing Capabilities: Multiple Controllers 00:15:05.864 Size (in LBAs): 131072 (0GiB) 00:15:05.864 Capacity (in LBAs): 131072 (0GiB) 00:15:05.864 Utilization (in LBAs): 131072 (0GiB) 00:15:05.864 NGUID: ABCDEF0123456789ABCDEF0123456789 00:15:05.864 EUI64: ABCDEF0123456789 00:15:05.864 UUID: bfd2732e-e22f-4749-aa45-9056bba5a938 00:15:05.864 Thin Provisioning: Not Supported 00:15:05.864 Per-NS Atomic Units: Yes 00:15:05.864 Atomic Boundary Size (Normal): 0 00:15:05.864 Atomic Boundary Size (PFail): 0 00:15:05.864 Atomic Boundary Offset: 0 00:15:05.864 Maximum Single Source Range Length: 65535 00:15:05.864 Maximum Copy Length: 65535 00:15:05.864 Maximum Source Range Count: 1 00:15:05.864 NGUID/EUI64 Never Reused: No 00:15:05.864 Namespace Write Protected: No 00:15:05.864 Number of LBA Formats: 1 00:15:05.864 Current LBA Format: LBA Format #00 00:15:05.864 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:05.864 00:15:05.865 12:39:31 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:15:06.153 12:39:31 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:06.153 12:39:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.153 12:39:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:06.153 12:39:31 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.153 12:39:31 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:15:06.153 12:39:31 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:15:06.153 12:39:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:06.153 12:39:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:15:06.153 12:39:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:06.153 12:39:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:15:06.153 12:39:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:06.153 12:39:31 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:06.153 rmmod nvme_tcp 00:15:06.153 rmmod nvme_fabrics 00:15:06.153 rmmod nvme_keyring 00:15:06.153 12:39:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:06.153 12:39:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:15:06.153 12:39:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:15:06.153 12:39:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 75052 ']' 00:15:06.153 12:39:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 75052 00:15:06.153 12:39:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 75052 ']' 00:15:06.153 12:39:32 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 75052 00:15:06.153 12:39:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:15:06.153 12:39:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:06.153 12:39:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75052 00:15:06.153 killing process with pid 75052 00:15:06.153 12:39:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:06.153 12:39:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:06.153 12:39:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75052' 00:15:06.153 12:39:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 75052 00:15:06.153 12:39:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 75052 00:15:06.456 12:39:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:06.456 12:39:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:06.456 12:39:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:06.456 12:39:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:06.456 12:39:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:06.456 12:39:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.456 12:39:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:06.456 12:39:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.456 12:39:32 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:06.456 00:15:06.456 real 0m2.566s 00:15:06.456 user 0m7.139s 00:15:06.456 sys 0m0.681s 00:15:06.456 12:39:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:06.456 12:39:32 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:15:06.456 ************************************ 00:15:06.456 END TEST nvmf_identify 00:15:06.456 ************************************ 00:15:06.456 12:39:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:06.456 12:39:32 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:06.456 12:39:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:06.456 12:39:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:06.456 12:39:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:06.456 ************************************ 00:15:06.456 START TEST nvmf_perf 00:15:06.456 ************************************ 00:15:06.456 12:39:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:06.456 * Looking for test storage... 
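Note: the perf stage launched just above goes through the same run_test wrapper as the identify stage that just finished. A minimal standalone reproduction, assuming the same /home/vagrant/spdk_repo checkout layout that the paths in this log show (run_test only adds xtrace and timing around the script it wraps), would look roughly like:

# Illustrative sketch only; paths mirror the ones traced in this log and the
# environment written to autorun-spdk.conf earlier in the pipeline is assumed.
cd /home/vagrant/spdk_repo/spdk
source test/common/autotest_common.sh   # provides run_test, rpc_cmd, nvmftestinit/nvmftestfini
run_test "nvmf_perf" test/nvmf/host/perf.sh --transport=tcp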
00:15:06.456 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:06.456 12:39:32 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:06.456 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:15:06.456 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:06.456 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:06.456 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:06.456 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:06.456 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:06.457 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:06.457 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:06.457 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:06.457 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:06.457 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:06.457 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:15:06.457 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=16360ad5-8c23-4d49-afe0-9a35c426fec5 00:15:06.457 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:06.457 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:06.457 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:06.457 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:06.457 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:06.457 12:39:32 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:06.457 12:39:32 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:06.457 12:39:32 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:06.457 12:39:32 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.457 12:39:32 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.457 12:39:32 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.457 12:39:32 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:15:06.457 12:39:32 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.457 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:15:06.457 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:06.457 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:06.457 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:06.457 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:06.457 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:06.457 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:06.457 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:06.457 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:06.457 12:39:32 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:06.457 12:39:32 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:06.457 12:39:32 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 
-- # [[ tcp == tcp ]] 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:06.715 Cannot find device "nvmf_tgt_br" 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:06.715 Cannot find device "nvmf_tgt_br2" 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:06.715 Cannot find device "nvmf_tgt_br" 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:06.715 Cannot find device "nvmf_tgt_br2" 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:06.715 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:06.715 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:06.715 
12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:06.715 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:07.061 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:07.061 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:07.061 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:07.061 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:07.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:07.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:15:07.061 00:15:07.061 --- 10.0.0.2 ping statistics --- 00:15:07.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:07.061 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:15:07.061 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:07.061 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:07.061 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:15:07.061 00:15:07.061 --- 10.0.0.3 ping statistics --- 00:15:07.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:07.061 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:15:07.061 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:07.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:07.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:15:07.061 00:15:07.061 --- 10.0.0.1 ping statistics --- 00:15:07.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:07.061 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:15:07.061 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:07.061 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:15:07.061 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:07.061 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:07.061 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:07.061 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:07.061 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:07.061 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:07.061 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:07.061 12:39:32 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:15:07.061 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:07.061 12:39:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:07.061 12:39:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:07.061 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=75261 00:15:07.061 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 75261 00:15:07.061 12:39:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 75261 ']' 00:15:07.061 12:39:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.061 12:39:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:07.061 12:39:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:07.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:07.061 12:39:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:07.061 12:39:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:07.061 12:39:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:07.061 [2024-07-12 12:39:32.914158] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:15:07.061 [2024-07-12 12:39:32.914269] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:07.061 [2024-07-12 12:39:33.057220] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:07.319 [2024-07-12 12:39:33.188606] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:07.319 [2024-07-12 12:39:33.188704] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:07.319 [2024-07-12 12:39:33.188720] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:07.319 [2024-07-12 12:39:33.188732] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:07.319 [2024-07-12 12:39:33.188741] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:07.319 [2024-07-12 12:39:33.190120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:07.319 [2024-07-12 12:39:33.190304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:07.319 [2024-07-12 12:39:33.190428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:07.319 [2024-07-12 12:39:33.190525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.319 [2024-07-12 12:39:33.250804] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:07.912 12:39:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:07.912 12:39:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:15:07.912 12:39:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:07.912 12:39:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:07.912 12:39:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:07.912 12:39:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:07.912 12:39:33 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:15:07.912 12:39:33 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:08.481 12:39:34 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:15:08.481 12:39:34 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:15:08.740 12:39:34 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:15:08.741 12:39:34 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:09.000 12:39:34 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:15:09.000 12:39:34 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:15:09.000 12:39:34 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:15:09.000 12:39:34 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:15:09.000 12:39:34 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:09.259 [2024-07-12 12:39:35.183334] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:09.259 12:39:35 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:09.517 12:39:35 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:09.517 12:39:35 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:09.775 12:39:35 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:09.775 12:39:35 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Nvme0n1 00:15:10.033 12:39:35 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:10.291 [2024-07-12 12:39:36.120543] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:10.291 12:39:36 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:10.549 12:39:36 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:15:10.549 12:39:36 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:10.549 12:39:36 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:15:10.549 12:39:36 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:15:11.484 Initializing NVMe Controllers 00:15:11.484 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:15:11.484 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:15:11.484 Initialization complete. Launching workers. 00:15:11.484 ======================================================== 00:15:11.484 Latency(us) 00:15:11.484 Device Information : IOPS MiB/s Average min max 00:15:11.484 PCIE (0000:00:10.0) NSID 1 from core 0: 23830.74 93.09 1342.96 361.60 7642.52 00:15:11.484 ======================================================== 00:15:11.484 Total : 23830.74 93.09 1342.96 361.60 7642.52 00:15:11.484 00:15:11.484 12:39:37 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:12.858 Initializing NVMe Controllers 00:15:12.858 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:12.858 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:12.858 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:12.858 Initialization complete. Launching workers. 00:15:12.858 ======================================================== 00:15:12.858 Latency(us) 00:15:12.858 Device Information : IOPS MiB/s Average min max 00:15:12.858 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3627.40 14.17 275.38 102.30 4319.82 00:15:12.858 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.50 0.48 8160.29 7915.44 12081.71 00:15:12.858 ======================================================== 00:15:12.858 Total : 3750.90 14.65 535.00 102.30 12081.71 00:15:12.858 00:15:12.858 12:39:38 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:14.244 Initializing NVMe Controllers 00:15:14.244 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:14.244 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:14.244 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:14.244 Initialization complete. Launching workers. 
00:15:14.244 ======================================================== 00:15:14.244 Latency(us) 00:15:14.244 Device Information : IOPS MiB/s Average min max 00:15:14.244 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8612.16 33.64 3716.39 597.61 7900.81 00:15:14.244 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4006.14 15.65 7998.49 6601.63 9389.03 00:15:14.244 ======================================================== 00:15:14.244 Total : 12618.30 49.29 5075.90 597.61 9389.03 00:15:14.244 00:15:14.244 12:39:40 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:15:14.244 12:39:40 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:16.771 Initializing NVMe Controllers 00:15:16.771 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:16.771 Controller IO queue size 128, less than required. 00:15:16.771 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:16.771 Controller IO queue size 128, less than required. 00:15:16.771 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:16.771 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:16.771 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:16.771 Initialization complete. Launching workers. 00:15:16.771 ======================================================== 00:15:16.771 Latency(us) 00:15:16.771 Device Information : IOPS MiB/s Average min max 00:15:16.771 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1510.92 377.73 85466.10 47705.67 145443.27 00:15:16.771 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 635.20 158.80 207205.83 65199.76 339509.39 00:15:16.771 ======================================================== 00:15:16.771 Total : 2146.12 536.53 121498.34 47705.67 339509.39 00:15:16.771 00:15:16.771 12:39:42 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:15:17.029 Initializing NVMe Controllers 00:15:17.029 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:17.029 Controller IO queue size 128, less than required. 00:15:17.029 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:17.029 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:15:17.030 Controller IO queue size 128, less than required. 00:15:17.030 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:17.030 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:15:17.030 WARNING: Some requested NVMe devices were skipped 00:15:17.030 No valid NVMe controllers or AIO or URING devices found 00:15:17.030 12:39:43 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:15:19.559 Initializing NVMe Controllers 00:15:19.559 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:19.559 Controller IO queue size 128, less than required. 00:15:19.559 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:19.559 Controller IO queue size 128, less than required. 00:15:19.559 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:19.559 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:19.559 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:19.559 Initialization complete. Launching workers. 00:15:19.559 00:15:19.559 ==================== 00:15:19.559 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:15:19.559 TCP transport: 00:15:19.559 polls: 7809 00:15:19.559 idle_polls: 4775 00:15:19.559 sock_completions: 3034 00:15:19.559 nvme_completions: 5975 00:15:19.559 submitted_requests: 9012 00:15:19.559 queued_requests: 1 00:15:19.559 00:15:19.559 ==================== 00:15:19.559 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:15:19.559 TCP transport: 00:15:19.559 polls: 7752 00:15:19.559 idle_polls: 4129 00:15:19.559 sock_completions: 3623 00:15:19.559 nvme_completions: 6261 00:15:19.559 submitted_requests: 9312 00:15:19.559 queued_requests: 1 00:15:19.559 ======================================================== 00:15:19.559 Latency(us) 00:15:19.559 Device Information : IOPS MiB/s Average min max 00:15:19.559 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1493.34 373.34 88018.93 41607.88 145450.88 00:15:19.559 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1564.84 391.21 82046.73 44086.85 115990.40 00:15:19.559 ======================================================== 00:15:19.559 Total : 3058.18 764.55 84963.03 41607.88 145450.88 00:15:19.559 00:15:19.559 12:39:45 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:15:19.816 12:39:45 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:20.073 12:39:45 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:15:20.073 12:39:45 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:20.073 12:39:45 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:15:20.073 12:39:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:20.073 12:39:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:15:20.073 12:39:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:20.073 12:39:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:15:20.073 12:39:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:20.073 12:39:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:20.073 rmmod nvme_tcp 00:15:20.073 rmmod nvme_fabrics 00:15:20.073 rmmod nvme_keyring 00:15:20.073 12:39:45 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:20.073 12:39:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:15:20.073 12:39:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:15:20.073 12:39:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 75261 ']' 00:15:20.073 12:39:45 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 75261 00:15:20.073 12:39:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 75261 ']' 00:15:20.073 12:39:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 75261 00:15:20.073 12:39:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:15:20.073 12:39:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:20.073 12:39:45 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75261 00:15:20.073 killing process with pid 75261 00:15:20.073 12:39:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:20.073 12:39:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:20.073 12:39:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75261' 00:15:20.073 12:39:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 75261 00:15:20.073 12:39:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 75261 00:15:20.639 12:39:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:20.639 12:39:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:20.639 12:39:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:20.639 12:39:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:20.639 12:39:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:20.639 12:39:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.639 12:39:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:20.639 12:39:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:20.639 12:39:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:20.639 00:15:20.639 real 0m14.225s 00:15:20.639 user 0m52.083s 00:15:20.639 sys 0m4.268s 00:15:20.639 12:39:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:20.639 12:39:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:15:20.639 ************************************ 00:15:20.639 END TEST nvmf_perf 00:15:20.639 ************************************ 00:15:20.639 12:39:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:20.639 12:39:46 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:20.639 12:39:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:20.639 12:39:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:20.639 12:39:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:20.639 ************************************ 00:15:20.639 START TEST nvmf_fio_host 00:15:20.639 ************************************ 00:15:20.639 12:39:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:20.898 * Looking for test storage... 
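Editor's note: the nvmf_perf stage above runs spdk_nvme_perf several times against the same TCP target, varying only the I/O geometry and the reporting flags. The following is a condensed sketch of the command shape, with option values copied from this log and the comments added editorially; it is an illustration, not part of the test script.

  # Sketch only; values taken from the runs above.
  #   -r   target transport ID string ('trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420')
  #   -q   queue depth (128)
  #   -o   I/O size in bytes (262144, or 36964 in one run); per the warnings above, a size
  #        that is not a multiple of a namespace's sector size gets that namespace removed
  #        from the test
  #   -w / -M   workload pattern and read percentage (randrw, 50)
  #   -t   run time in seconds (2 or 5)
  #   --transport-stat   additionally print the per-qpair TCP poll/completion counters
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat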
00:15:20.898 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:20.898 12:39:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:20.898 12:39:46 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:20.898 12:39:46 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:20.898 12:39:46 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:20.898 12:39:46 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.898 12:39:46 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.898 12:39:46 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.898 12:39:46 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:20.898 12:39:46 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.898 12:39:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:20.898 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:15:20.898 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:20.898 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:20.898 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:20.898 12:39:46 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:20.898 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:20.898 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:20.898 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:20.898 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:20.898 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:20.898 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:20.898 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:15:20.898 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=16360ad5-8c23-4d49-afe0-9a35c426fec5 00:15:20.898 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:20.898 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:20.898 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:20.898 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:20.898 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:20.898 12:39:46 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:20.898 12:39:46 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:20.898 12:39:46 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:20.898 12:39:46 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.898 12:39:46 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.898 12:39:46 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.898 12:39:46 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:15:20.898 12:39:46 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.898 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:15:20.898 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:20.898 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:20.898 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:20.898 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:20.898 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:20.898 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:20.898 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:20.898 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 
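Editor's note: the nvmf/common.sh defaults sourced above (first target port 4420, a freshly generated host NQN and host ID, NVME_CONNECT and the NVME_HOST argument array) describe how a kernel-mode initiator would reach the target. This particular test drives I/O through the SPDK fio plugin rather than nvme-cli, so the line below is only an illustrative expansion of those variables, using the address, port, and NQN values visible in this log.

  # Illustrative only; not executed by this job.
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 \
      --hostid=16360ad5-8c23-4d49-afe0-9a35c426fec5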
00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:20.899 Cannot find device "nvmf_tgt_br" 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:20.899 Cannot find device "nvmf_tgt_br2" 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:20.899 Cannot find device "nvmf_tgt_br" 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:20.899 Cannot find device "nvmf_tgt_br2" 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:20.899 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:20.899 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link 
add nvmf_init_if type veth peer name nvmf_init_br 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:20.899 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:21.157 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:21.157 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:21.157 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:21.157 12:39:46 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:21.157 12:39:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:21.157 12:39:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:21.157 12:39:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:21.157 12:39:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:21.157 12:39:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:21.157 12:39:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:21.157 12:39:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:21.157 12:39:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:21.157 12:39:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:21.157 12:39:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:21.157 12:39:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:21.157 12:39:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:21.157 12:39:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:21.157 12:39:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:21.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:21.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:15:21.157 00:15:21.157 --- 10.0.0.2 ping statistics --- 00:15:21.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:21.157 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:15:21.157 12:39:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:21.157 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:21.157 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:15:21.157 00:15:21.157 --- 10.0.0.3 ping statistics --- 00:15:21.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:21.157 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:15:21.157 12:39:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:21.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:21.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:15:21.157 00:15:21.157 --- 10.0.0.1 ping statistics --- 00:15:21.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:21.157 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:15:21.157 12:39:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:21.157 12:39:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:15:21.157 12:39:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:21.157 12:39:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:21.157 12:39:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:21.157 12:39:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:21.157 12:39:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:21.157 12:39:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:21.157 12:39:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:21.157 12:39:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:15:21.157 12:39:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:15:21.157 12:39:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:21.157 12:39:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:21.157 12:39:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=75669 00:15:21.157 12:39:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:21.157 12:39:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:21.157 12:39:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 75669 00:15:21.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:21.157 12:39:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 75669 ']' 00:15:21.157 12:39:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.157 12:39:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:21.157 12:39:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:21.157 12:39:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:21.157 12:39:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:21.157 [2024-07-12 12:39:47.204013] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:15:21.157 [2024-07-12 12:39:47.204267] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:21.416 [2024-07-12 12:39:47.340588] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:21.416 [2024-07-12 12:39:47.447646] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:21.416 [2024-07-12 12:39:47.447978] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:21.416 [2024-07-12 12:39:47.448137] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:21.416 [2024-07-12 12:39:47.448298] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:21.416 [2024-07-12 12:39:47.448339] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:21.416 [2024-07-12 12:39:47.448558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:21.416 [2024-07-12 12:39:47.448671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:21.416 [2024-07-12 12:39:47.448756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:21.416 [2024-07-12 12:39:47.448756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.673 [2024-07-12 12:39:47.507728] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:22.238 12:39:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:22.238 12:39:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:15:22.238 12:39:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:22.495 [2024-07-12 12:39:48.424897] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:22.495 12:39:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:15:22.495 12:39:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:22.495 12:39:48 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:22.495 12:39:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:22.754 Malloc1 00:15:22.754 12:39:48 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:23.013 12:39:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:23.271 12:39:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:23.530 [2024-07-12 12:39:49.580033] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:23.530 12:39:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:24.095 12:39:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:24.095 12:39:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:24.095 12:39:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:24.095 12:39:49 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:24.095 12:39:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:24.095 12:39:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:24.095 12:39:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:24.095 12:39:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:15:24.095 12:39:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:24.095 12:39:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:24.095 12:39:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:24.095 12:39:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:15:24.095 12:39:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:24.095 12:39:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:24.095 12:39:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:24.095 12:39:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:24.095 12:39:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:24.095 12:39:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:15:24.095 12:39:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:24.095 12:39:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:24.095 12:39:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:24.095 12:39:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:24.095 12:39:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:24.095 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:24.095 fio-3.35 00:15:24.095 Starting 1 thread 00:15:26.622 00:15:26.622 test: (groupid=0, jobs=1): err= 0: pid=75752: Fri Jul 12 12:39:52 2024 00:15:26.622 read: IOPS=8938, BW=34.9MiB/s (36.6MB/s)(70.1MiB/2007msec) 00:15:26.622 slat (usec): min=2, max=346, avg= 2.53, stdev= 3.14 00:15:26.622 clat (usec): min=2195, max=13996, avg=7438.17, stdev=528.40 00:15:26.622 lat (usec): min=2235, max=13998, avg=7440.70, stdev=528.11 00:15:26.622 clat percentiles (usec): 00:15:26.622 | 1.00th=[ 6390], 5.00th=[ 6718], 10.00th=[ 6849], 20.00th=[ 7046], 00:15:26.622 | 30.00th=[ 7177], 40.00th=[ 7308], 50.00th=[ 7439], 60.00th=[ 7504], 00:15:26.622 | 70.00th=[ 7635], 80.00th=[ 7832], 90.00th=[ 8029], 95.00th=[ 8225], 00:15:26.622 | 99.00th=[ 8586], 99.50th=[ 8848], 99.90th=[11994], 99.95th=[13829], 00:15:26.622 | 99.99th=[13960] 00:15:26.623 bw ( KiB/s): min=35152, max=36272, per=100.00%, avg=35754.00, stdev=467.00, samples=4 00:15:26.623 iops : min= 8788, max= 9068, avg=8938.50, stdev=116.75, samples=4 00:15:26.623 write: IOPS=8957, BW=35.0MiB/s (36.7MB/s)(70.2MiB/2007msec); 0 zone resets 00:15:26.623 slat (usec): 
min=2, max=211, avg= 2.68, stdev= 1.91 00:15:26.623 clat (usec): min=2068, max=13407, avg=6795.35, stdev=480.05 00:15:26.623 lat (usec): min=2079, max=13410, avg=6798.03, stdev=479.91 00:15:26.623 clat percentiles (usec): 00:15:26.623 | 1.00th=[ 5866], 5.00th=[ 6128], 10.00th=[ 6325], 20.00th=[ 6456], 00:15:26.623 | 30.00th=[ 6587], 40.00th=[ 6718], 50.00th=[ 6783], 60.00th=[ 6849], 00:15:26.623 | 70.00th=[ 6980], 80.00th=[ 7111], 90.00th=[ 7308], 95.00th=[ 7439], 00:15:26.623 | 99.00th=[ 7898], 99.50th=[ 8029], 99.90th=[11338], 99.95th=[12125], 00:15:26.623 | 99.99th=[13304] 00:15:26.623 bw ( KiB/s): min=35320, max=36216, per=99.99%, avg=35826.00, stdev=373.03, samples=4 00:15:26.623 iops : min= 8830, max= 9054, avg=8956.50, stdev=93.26, samples=4 00:15:26.623 lat (msec) : 4=0.15%, 10=99.67%, 20=0.19% 00:15:26.623 cpu : usr=69.39%, sys=22.63%, ctx=30, majf=0, minf=7 00:15:26.623 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:26.623 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:26.623 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:26.623 issued rwts: total=17939,17978,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:26.623 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:26.623 00:15:26.623 Run status group 0 (all jobs): 00:15:26.623 READ: bw=34.9MiB/s (36.6MB/s), 34.9MiB/s-34.9MiB/s (36.6MB/s-36.6MB/s), io=70.1MiB (73.5MB), run=2007-2007msec 00:15:26.623 WRITE: bw=35.0MiB/s (36.7MB/s), 35.0MiB/s-35.0MiB/s (36.7MB/s-36.7MB/s), io=70.2MiB (73.6MB), run=2007-2007msec 00:15:26.623 12:39:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:26.623 12:39:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:26.623 12:39:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:26.623 12:39:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:26.623 12:39:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:26.623 12:39:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:26.623 12:39:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:15:26.623 12:39:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:26.623 12:39:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:26.623 12:39:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:26.623 12:39:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:15:26.623 12:39:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:26.623 12:39:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:26.623 12:39:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:26.623 12:39:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:26.623 12:39:52 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:26.623 12:39:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:26.623 12:39:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:15:26.623 12:39:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:15:26.623 12:39:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:15:26.623 12:39:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:26.623 12:39:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:26.623 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:15:26.623 fio-3.35 00:15:26.623 Starting 1 thread 00:15:29.150 00:15:29.150 test: (groupid=0, jobs=1): err= 0: pid=75795: Fri Jul 12 12:39:54 2024 00:15:29.150 read: IOPS=8235, BW=129MiB/s (135MB/s)(259MiB/2011msec) 00:15:29.150 slat (usec): min=3, max=125, avg= 3.77, stdev= 1.71 00:15:29.150 clat (usec): min=2101, max=18636, avg=8592.68, stdev=2571.25 00:15:29.150 lat (usec): min=2104, max=18639, avg=8596.45, stdev=2571.27 00:15:29.150 clat percentiles (usec): 00:15:29.150 | 1.00th=[ 4146], 5.00th=[ 5014], 10.00th=[ 5473], 20.00th=[ 6259], 00:15:29.150 | 30.00th=[ 6980], 40.00th=[ 7635], 50.00th=[ 8291], 60.00th=[ 9110], 00:15:29.150 | 70.00th=[10028], 80.00th=[10683], 90.00th=[11863], 95.00th=[13173], 00:15:29.150 | 99.00th=[15401], 99.50th=[16057], 99.90th=[18482], 99.95th=[18482], 00:15:29.150 | 99.99th=[18744] 00:15:29.150 bw ( KiB/s): min=60416, max=78240, per=52.08%, avg=68627.75, stdev=8181.23, samples=4 00:15:29.150 iops : min= 3776, max= 4890, avg=4289.00, stdev=511.18, samples=4 00:15:29.150 write: IOPS=4869, BW=76.1MiB/s (79.8MB/s)(140MiB/1846msec); 0 zone resets 00:15:29.150 slat (usec): min=35, max=712, avg=38.23, stdev=11.24 00:15:29.150 clat (usec): min=5405, max=19938, avg=12018.90, stdev=2208.32 00:15:29.150 lat (usec): min=5442, max=19975, avg=12057.13, stdev=2208.12 00:15:29.150 clat percentiles (usec): 00:15:29.150 | 1.00th=[ 7963], 5.00th=[ 8979], 10.00th=[ 9503], 20.00th=[10159], 00:15:29.150 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11731], 60.00th=[12256], 00:15:29.150 | 70.00th=[12780], 80.00th=[13698], 90.00th=[15270], 95.00th=[16319], 00:15:29.150 | 99.00th=[18220], 99.50th=[18744], 99.90th=[19792], 99.95th=[19792], 00:15:29.150 | 99.99th=[20055] 00:15:29.150 bw ( KiB/s): min=63040, max=80704, per=91.75%, avg=71482.25, stdev=8070.84, samples=4 00:15:29.150 iops : min= 3940, max= 5044, avg=4467.50, stdev=504.33, samples=4 00:15:29.150 lat (msec) : 4=0.46%, 10=50.86%, 20=48.68% 00:15:29.150 cpu : usr=81.94%, sys=13.53%, ctx=19, majf=0, minf=10 00:15:29.150 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:15:29.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.150 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:29.150 issued rwts: total=16562,8989,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:29.150 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:29.150 00:15:29.150 Run status group 0 (all jobs): 00:15:29.151 READ: bw=129MiB/s (135MB/s), 
129MiB/s-129MiB/s (135MB/s-135MB/s), io=259MiB (271MB), run=2011-2011msec 00:15:29.151 WRITE: bw=76.1MiB/s (79.8MB/s), 76.1MiB/s-76.1MiB/s (79.8MB/s-79.8MB/s), io=140MiB (147MB), run=1846-1846msec 00:15:29.151 12:39:54 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:29.151 12:39:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:15:29.151 12:39:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:29.151 12:39:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:15:29.151 12:39:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:15:29.151 12:39:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:29.151 12:39:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:15:29.151 12:39:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:29.151 12:39:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:15:29.151 12:39:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:29.151 12:39:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:29.151 rmmod nvme_tcp 00:15:29.151 rmmod nvme_fabrics 00:15:29.151 rmmod nvme_keyring 00:15:29.151 12:39:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:29.151 12:39:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:15:29.151 12:39:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:15:29.151 12:39:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 75669 ']' 00:15:29.151 12:39:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 75669 00:15:29.151 12:39:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 75669 ']' 00:15:29.151 12:39:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 75669 00:15:29.151 12:39:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:15:29.151 12:39:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:29.151 12:39:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 75669 00:15:29.151 killing process with pid 75669 00:15:29.151 12:39:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:29.151 12:39:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:29.151 12:39:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 75669' 00:15:29.151 12:39:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 75669 00:15:29.151 12:39:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 75669 00:15:29.717 12:39:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:29.717 12:39:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:29.717 12:39:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:29.717 12:39:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:29.718 12:39:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:29.718 12:39:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.718 12:39:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
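Editor's note: the two fio_host jobs above (example_config.fio and mock_sgl_config.fio) never touch a local block device; fio is preloaded with the SPDK NVMe plugin and the --filename string names the NVMe-oF target instead. Assembled from the LD_PRELOAD and fio lines visible in the trace above, the effective invocation is roughly:

  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
      /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The job files select ioengine=spdk (visible in the fio banner above), which routes all I/O through the preloaded plugin to 10.0.0.2:4420, namespace 1.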
00:15:29.718 12:39:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.718 12:39:55 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:29.718 ************************************ 00:15:29.718 END TEST nvmf_fio_host 00:15:29.718 ************************************ 00:15:29.718 00:15:29.718 real 0m8.852s 00:15:29.718 user 0m36.261s 00:15:29.718 sys 0m2.364s 00:15:29.718 12:39:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:29.718 12:39:55 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:15:29.718 12:39:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:29.718 12:39:55 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:29.718 12:39:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:29.718 12:39:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:29.718 12:39:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:29.718 ************************************ 00:15:29.718 START TEST nvmf_failover 00:15:29.718 ************************************ 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:29.718 * Looking for test storage... 00:15:29.718 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=16360ad5-8c23-4d49-afe0-9a35c426fec5 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 
-- # [[ -e /bin/wpdk_common.sh ]] 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:29.718 Cannot find device "nvmf_tgt_br" 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link 
set nvmf_tgt_br2 nomaster 00:15:29.718 Cannot find device "nvmf_tgt_br2" 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:29.718 Cannot find device "nvmf_tgt_br" 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:29.718 Cannot find device "nvmf_tgt_br2" 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:15:29.718 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:29.976 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:29.976 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:29.976 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:29.976 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:15:29.976 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:29.976 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:29.976 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:15:29.976 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:29.976 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:29.976 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:29.976 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:29.976 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:29.976 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:29.976 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:29.976 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:29.976 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:29.976 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:29.976 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:29.976 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:29.976 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:29.976 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:29.976 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:29.976 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:29.976 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:29.976 12:39:55 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:29.976 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:29.976 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:29.976 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:29.976 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:29.976 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:29.976 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:29.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:29.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:15:29.976 00:15:29.976 --- 10.0.0.2 ping statistics --- 00:15:29.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.976 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:15:29.976 12:39:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:29.976 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:29.976 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:15:29.976 00:15:29.976 --- 10.0.0.3 ping statistics --- 00:15:29.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.976 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:15:29.976 12:39:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:29.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:29.976 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:29.976 00:15:29.976 --- 10.0.0.1 ping statistics --- 00:15:29.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.976 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:29.976 12:39:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:29.976 12:39:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:15:29.976 12:39:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:29.976 12:39:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:29.976 12:39:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:29.976 12:39:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:29.977 12:39:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:29.977 12:39:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:29.977 12:39:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:29.977 12:39:56 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:15:29.977 12:39:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:29.977 12:39:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:29.977 12:39:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:29.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
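Editor's note: both host tests in this section rebuild the same virtual test network before starting the target. Condensing the nvmf_veth_init commands that appear verbatim in the trace above (the various ip link ... up calls and the second target interface nvmf_tgt_if2 at 10.0.0.3 are elided here for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, stays on the host
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge                              # bridges the *_br peer ends
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The nvmf_tgt application itself is then launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt -m 0xE), so the listeners the failover test registers a few lines below on 10.0.0.2 ports 4420, 4421, and 4422 are reachable from the host-side initiator across the bridge.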
00:15:29.977 12:39:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=76017 00:15:29.977 12:39:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:29.977 12:39:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 76017 00:15:29.977 12:39:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 76017 ']' 00:15:29.977 12:39:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.977 12:39:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:29.977 12:39:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.977 12:39:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:29.977 12:39:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:30.343 [2024-07-12 12:39:56.095586] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:15:30.343 [2024-07-12 12:39:56.095699] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.343 [2024-07-12 12:39:56.237460] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:30.343 [2024-07-12 12:39:56.348411] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:30.343 [2024-07-12 12:39:56.348754] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:30.343 [2024-07-12 12:39:56.348892] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:30.343 [2024-07-12 12:39:56.349025] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:30.343 [2024-07-12 12:39:56.349061] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
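The target is then started inside the namespace so that it owns the 10.0.0.2/10.0.0.3 side of the topology, and the harness blocks until the RPC socket is up. A minimal stand-in for the nvmfappstart/waitforlisten step, using the binary path and flags from the trace; the polling loop is a simplification of what autotest_common.sh actually does:

    # Launch nvmf_tgt inside the namespace with the traced flags
    # (-e 0xFFFF trace mask, -m 0xE core mask -> the three reactors reported below).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # Simplified waitforlisten: poll until the UNIX-domain RPC socket shows up.
    for _ in $(seq 1 100); do
        [ -S /var/tmp/spdk.sock ] && break
        sleep 0.1
    done
    kill -0 "$nvmfpid"    # bail out early if the target already exited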
00:15:30.343 [2024-07-12 12:39:56.349318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:30.343 [2024-07-12 12:39:56.349387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.343 [2024-07-12 12:39:56.349386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:30.624 [2024-07-12 12:39:56.405305] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:31.188 12:39:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:31.188 12:39:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:15:31.188 12:39:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:31.188 12:39:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:31.188 12:39:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:31.188 12:39:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:31.188 12:39:57 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:31.446 [2024-07-12 12:39:57.345886] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:31.446 12:39:57 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:31.703 Malloc0 00:15:31.703 12:39:57 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:31.959 12:39:57 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:32.216 12:39:58 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:32.472 [2024-07-12 12:39:58.374171] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:32.472 12:39:58 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:32.728 [2024-07-12 12:39:58.610363] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:32.728 12:39:58 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:32.985 [2024-07-12 12:39:58.846603] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:32.985 12:39:58 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=76069 00:15:32.985 12:39:58 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:15:32.985 12:39:58 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:32.985 12:39:58 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 76069 /var/tmp/bdevperf.sock 00:15:32.985 12:39:58 nvmf_tcp.nvmf_failover 
-- common/autotest_common.sh@829 -- # '[' -z 76069 ']' 00:15:32.985 12:39:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:32.985 12:39:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:32.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:32.985 12:39:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:32.985 12:39:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:32.985 12:39:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:33.917 12:39:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:33.917 12:39:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:15:33.917 12:39:59 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:34.174 NVMe0n1 00:15:34.174 12:40:00 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:34.739 00:15:34.739 12:40:00 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=76097 00:15:34.739 12:40:00 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:15:34.739 12:40:00 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:35.685 12:40:01 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:35.685 [2024-07-12 12:40:01.755880] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a85950 is same with the state(5) to be set 00:15:35.685 [2024-07-12 12:40:01.755957] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a85950 is same with the state(5) to be set 00:15:35.685 [2024-07-12 12:40:01.755971] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a85950 is same with the state(5) to be set 00:15:35.942 12:40:01 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:15:39.216 12:40:04 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:39.216 00:15:39.217 12:40:05 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:39.475 12:40:05 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:15:42.754 12:40:08 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:42.754 [2024-07-12 12:40:08.631743] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:42.754 12:40:08 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 
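Stripped of the xtrace noise, the target construction and the failover choreography traced above amount to roughly the script below. Every RPC call and flag is taken from the trace; only the grouping into one place and the $rpc/$bperf shorthands are added for readability, and it assumes bdevperf is already running with -z -r /var/tmp/bdevperf.sock as launched earlier.

    #!/usr/bin/env bash
    # Condensed replay of the host/failover.sh steps traced above.
    set -e
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"                              # target RPC (spdk.sock)
    bperf="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"  # bdevperf RPC
    nqn=nqn.2016-06.io.spdk:cnode1

    # Build the target: TCP transport, a 64 MB / 512-byte-block malloc bdev, one
    # subsystem, and three listeners on 10.0.0.2 that the test will cycle through.
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem $nqn -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns $nqn Malloc0
    for port in 4420 4421 4422; do
        $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s "$port"
    done

    # Register two paths to the same subsystem under one bdev name, start I/O,
    # then keep pulling the active listener away so bdev_nvme has to fail over.
    $bperf bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn
    $bperf bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $nqn
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests &
    run_test_pid=$!
    sleep 1
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420    # 4420 -> 4421
    sleep 3
    $bperf bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $nqn
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4421    # 4421 -> 4422
    sleep 3
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420       # bring 4420 back
    sleep 1

The remaining steps in the trace follow immediately below: removing listener 4422, waiting for perform_tests to finish, killing bdevperf, and dumping its try.txt output.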
00:15:43.687 12:40:09 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:43.946 12:40:09 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 76097 00:15:50.568 0 00:15:50.568 12:40:15 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 76069 00:15:50.568 12:40:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 76069 ']' 00:15:50.568 12:40:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 76069 00:15:50.568 12:40:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:15:50.569 12:40:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:50.569 12:40:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76069 00:15:50.569 12:40:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:50.569 killing process with pid 76069 00:15:50.569 12:40:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:50.569 12:40:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76069' 00:15:50.569 12:40:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 76069 00:15:50.569 12:40:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 76069 00:15:50.569 12:40:15 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:50.569 [2024-07-12 12:39:58.906900] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:15:50.569 [2024-07-12 12:39:58.907006] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76069 ] 00:15:50.569 [2024-07-12 12:39:59.044256] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.569 [2024-07-12 12:39:59.172974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.569 [2024-07-12 12:39:59.233251] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:50.569 Running I/O for 15 seconds... 
00:15:50.569 [2024-07-12 12:40:01.756331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:76064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.569 [2024-07-12 12:40:01.756401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.569 [2024-07-12 12:40:01.756430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:76072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.569 [2024-07-12 12:40:01.756446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.569 [2024-07-12 12:40:01.756480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:76080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.569 [2024-07-12 12:40:01.756495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.569 [2024-07-12 12:40:01.756511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:76088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.569 [2024-07-12 12:40:01.756525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.569 [2024-07-12 12:40:01.756541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:76416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.569 [2024-07-12 12:40:01.756555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.569 [2024-07-12 12:40:01.756570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.569 [2024-07-12 12:40:01.756585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.569 [2024-07-12 12:40:01.756600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.569 [2024-07-12 12:40:01.756614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.569 [2024-07-12 12:40:01.756630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:76440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.569 [2024-07-12 12:40:01.756643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.569 [2024-07-12 12:40:01.756659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:76448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.569 [2024-07-12 12:40:01.756673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.569 [2024-07-12 12:40:01.756688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:76456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.569 [2024-07-12 12:40:01.756701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.569 [2024-07-12 12:40:01.756717] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.569 [2024-07-12 12:40:01.756764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.569 [2024-07-12 12:40:01.756781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:76472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.569 [2024-07-12 12:40:01.756795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.569 [2024-07-12 12:40:01.756810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:76480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.569 [2024-07-12 12:40:01.756824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.569 [2024-07-12 12:40:01.756840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:76488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.569 [2024-07-12 12:40:01.756853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.569 [2024-07-12 12:40:01.756869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:76496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.569 [2024-07-12 12:40:01.756882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.569 [2024-07-12 12:40:01.756898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.569 [2024-07-12 12:40:01.756920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.569 [2024-07-12 12:40:01.756935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:76512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.569 [2024-07-12 12:40:01.756949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.569 [2024-07-12 12:40:01.756964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:76520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.569 [2024-07-12 12:40:01.756977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.569 [2024-07-12 12:40:01.756993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:76528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.569 [2024-07-12 12:40:01.757007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.569 [2024-07-12 12:40:01.757024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:76536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.569 [2024-07-12 12:40:01.757038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.569 [2024-07-12 12:40:01.757054] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:76096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.569 [2024-07-12 12:40:01.757068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.569 [2024-07-12 12:40:01.757083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:76104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.569 [2024-07-12 12:40:01.757097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.569 [2024-07-12 12:40:01.757112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:76112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.569 [2024-07-12 12:40:01.757126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.569 [2024-07-12 12:40:01.757149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:76120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.569 [2024-07-12 12:40:01.757164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.569 [2024-07-12 12:40:01.757179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:76128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.569 [2024-07-12 12:40:01.757193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.569 [2024-07-12 12:40:01.757208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.569 [2024-07-12 12:40:01.757221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.569 [2024-07-12 12:40:01.757237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:76144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.569 [2024-07-12 12:40:01.757251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.569 [2024-07-12 12:40:01.757266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:76152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.569 [2024-07-12 12:40:01.757280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.569 [2024-07-12 12:40:01.757296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:76160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.569 [2024-07-12 12:40:01.757309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.569 [2024-07-12 12:40:01.757325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:76168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.569 [2024-07-12 12:40:01.757338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.569 [2024-07-12 12:40:01.757354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 
lba:76176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.569 [2024-07-12 12:40:01.757367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.569 [2024-07-12 12:40:01.757382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:76184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.569 [2024-07-12 12:40:01.757396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.569 [2024-07-12 12:40:01.757423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:76192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.570 [2024-07-12 12:40:01.757438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.570 [2024-07-12 12:40:01.757453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:76200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.570 [2024-07-12 12:40:01.757467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.570 [2024-07-12 12:40:01.757483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:76208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.570 [2024-07-12 12:40:01.757496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.570 [2024-07-12 12:40:01.757513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:76216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.570 [2024-07-12 12:40:01.757526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.570 [2024-07-12 12:40:01.757550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:76544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.570 [2024-07-12 12:40:01.757564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.570 [2024-07-12 12:40:01.757580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.570 [2024-07-12 12:40:01.757593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.570 [2024-07-12 12:40:01.757608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.570 [2024-07-12 12:40:01.757622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.570 [2024-07-12 12:40:01.757638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:76568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.570 [2024-07-12 12:40:01.757651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.570 [2024-07-12 12:40:01.757666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:15:50.570 [2024-07-12 12:40:01.757680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.570 [2024-07-12 12:40:01.757695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.570 [2024-07-12 12:40:01.757709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.570 [2024-07-12 12:40:01.757725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.570 [2024-07-12 12:40:01.757738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.570 [2024-07-12 12:40:01.757753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.570 [2024-07-12 12:40:01.757767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.570 [2024-07-12 12:40:01.757783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:76224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.570 [2024-07-12 12:40:01.757796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.570 [2024-07-12 12:40:01.757812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:76232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.570 [2024-07-12 12:40:01.757825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.570 [2024-07-12 12:40:01.757840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:76240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.570 [2024-07-12 12:40:01.757854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.570 [2024-07-12 12:40:01.757870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:76248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.570 [2024-07-12 12:40:01.757883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.570 [2024-07-12 12:40:01.757898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:76256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.570 [2024-07-12 12:40:01.757918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.570 [2024-07-12 12:40:01.757934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:76264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.570 [2024-07-12 12:40:01.757948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.570 [2024-07-12 12:40:01.757963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:76272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.570 [2024-07-12 
12:40:01.757977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.570 [2024-07-12 12:40:01.757995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:76280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.570 [2024-07-12 12:40:01.758009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.570 [2024-07-12 12:40:01.758024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.570 [2024-07-12 12:40:01.758038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.570 [2024-07-12 12:40:01.758053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.570 [2024-07-12 12:40:01.758067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.570 [2024-07-12 12:40:01.758082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.570 [2024-07-12 12:40:01.758095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.570 [2024-07-12 12:40:01.758111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:76632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.570 [2024-07-12 12:40:01.758124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.570 [2024-07-12 12:40:01.758140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:76640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.570 [2024-07-12 12:40:01.758153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.570 [2024-07-12 12:40:01.758169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:76648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.570 [2024-07-12 12:40:01.758183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.570 [2024-07-12 12:40:01.758198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:76656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.570 [2024-07-12 12:40:01.758212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.570 [2024-07-12 12:40:01.758227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:76664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.570 [2024-07-12 12:40:01.758242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.570 [2024-07-12 12:40:01.758257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:76672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.570 [2024-07-12 12:40:01.758271] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.570 [2024-07-12 12:40:01.758293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:76680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.570 [2024-07-12 12:40:01.758307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.570 [2024-07-12 12:40:01.758322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:76688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.570 [2024-07-12 12:40:01.758335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.570 [2024-07-12 12:40:01.758351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:76696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.570 [2024-07-12 12:40:01.758364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.570 [2024-07-12 12:40:01.758380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:76704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.570 [2024-07-12 12:40:01.758393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.570 [2024-07-12 12:40:01.758419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.570 [2024-07-12 12:40:01.758434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.570 [2024-07-12 12:40:01.758449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:76720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.570 [2024-07-12 12:40:01.758463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.570 [2024-07-12 12:40:01.758479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:76728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.570 [2024-07-12 12:40:01.758493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.570 [2024-07-12 12:40:01.758508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.570 [2024-07-12 12:40:01.758522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.570 [2024-07-12 12:40:01.758537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:76744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.570 [2024-07-12 12:40:01.758551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.570 [2024-07-12 12:40:01.758569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:76752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.570 [2024-07-12 12:40:01.758583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.570 [2024-07-12 12:40:01.758598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:76760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.570 [2024-07-12 12:40:01.758611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.570 [2024-07-12 12:40:01.758627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:76768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.570 [2024-07-12 12:40:01.758640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.570 [2024-07-12 12:40:01.758665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:76776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.570 [2024-07-12 12:40:01.758686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.570 [2024-07-12 12:40:01.758702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.570 [2024-07-12 12:40:01.758716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.571 [2024-07-12 12:40:01.758732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:76792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.571 [2024-07-12 12:40:01.758746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.571 [2024-07-12 12:40:01.758762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:76288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.571 [2024-07-12 12:40:01.758776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.571 [2024-07-12 12:40:01.758792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:76296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.571 [2024-07-12 12:40:01.758805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.571 [2024-07-12 12:40:01.758821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:76304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.571 [2024-07-12 12:40:01.758835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.571 [2024-07-12 12:40:01.758850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:76312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.571 [2024-07-12 12:40:01.758863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.571 [2024-07-12 12:40:01.758879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:76320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.571 [2024-07-12 12:40:01.758893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:50.571 [2024-07-12 12:40:01.758908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:76328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.571 [2024-07-12 12:40:01.758922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.571 [2024-07-12 12:40:01.758937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.571 [2024-07-12 12:40:01.758951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.571 [2024-07-12 12:40:01.758967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:76344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.571 [2024-07-12 12:40:01.758980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.571 [2024-07-12 12:40:01.758996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:76800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.571 [2024-07-12 12:40:01.759010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.571 [2024-07-12 12:40:01.759025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:76808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.571 [2024-07-12 12:40:01.759039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.571 [2024-07-12 12:40:01.759055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:76816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.571 [2024-07-12 12:40:01.759075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.571 [2024-07-12 12:40:01.759091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:76824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.571 [2024-07-12 12:40:01.759104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.571 [2024-07-12 12:40:01.759120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:76832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.571 [2024-07-12 12:40:01.759134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.571 [2024-07-12 12:40:01.759156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:76840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.571 [2024-07-12 12:40:01.759170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.571 [2024-07-12 12:40:01.759185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:76848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.571 [2024-07-12 12:40:01.759199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.571 [2024-07-12 
12:40:01.759215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:76856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.571 [2024-07-12 12:40:01.759229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.571 [2024-07-12 12:40:01.759245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:76864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.571 [2024-07-12 12:40:01.759259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.571 [2024-07-12 12:40:01.759274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:76872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.571 [2024-07-12 12:40:01.759288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.571 [2024-07-12 12:40:01.759303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:76880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.571 [2024-07-12 12:40:01.759329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.571 [2024-07-12 12:40:01.759347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:76888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.571 [2024-07-12 12:40:01.759361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.571 [2024-07-12 12:40:01.759377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:76896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.571 [2024-07-12 12:40:01.759390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.571 [2024-07-12 12:40:01.759419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.571 [2024-07-12 12:40:01.759435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.571 [2024-07-12 12:40:01.759450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:76912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.571 [2024-07-12 12:40:01.759464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.571 [2024-07-12 12:40:01.759487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:76920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.571 [2024-07-12 12:40:01.759501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.571 [2024-07-12 12:40:01.759517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:76352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.571 [2024-07-12 12:40:01.759530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.571 [2024-07-12 12:40:01.759545] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:76360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.571 [2024-07-12 12:40:01.759559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.571 [2024-07-12 12:40:01.759575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:76368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.571 [2024-07-12 12:40:01.759597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.571 [2024-07-12 12:40:01.759613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:76376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.571 [2024-07-12 12:40:01.759626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.571 [2024-07-12 12:40:01.759641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:76384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.571 [2024-07-12 12:40:01.759655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.571 [2024-07-12 12:40:01.759677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:76392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.571 [2024-07-12 12:40:01.759690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.571 [2024-07-12 12:40:01.759706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:76400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.571 [2024-07-12 12:40:01.759719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.571 [2024-07-12 12:40:01.759734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcab7c0 is same with the state(5) to be set 00:15:50.571 [2024-07-12 12:40:01.759753] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:50.571 [2024-07-12 12:40:01.759765] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:50.571 [2024-07-12 12:40:01.759776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76408 len:8 PRP1 0x0 PRP2 0x0 00:15:50.571 [2024-07-12 12:40:01.759789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.571 [2024-07-12 12:40:01.759804] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:50.571 [2024-07-12 12:40:01.759814] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:50.571 [2024-07-12 12:40:01.759825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76928 len:8 PRP1 0x0 PRP2 0x0 00:15:50.571 [2024-07-12 12:40:01.759838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.571 [2024-07-12 12:40:01.759851] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:50.571 [2024-07-12 12:40:01.759861] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:50.571 [2024-07-12 12:40:01.759878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76936 len:8 PRP1 0x0 PRP2 0x0 00:15:50.571 [2024-07-12 12:40:01.759892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.571 [2024-07-12 12:40:01.759906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:50.571 [2024-07-12 12:40:01.759916] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:50.571 [2024-07-12 12:40:01.759926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76944 len:8 PRP1 0x0 PRP2 0x0 00:15:50.571 [2024-07-12 12:40:01.759939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.571 [2024-07-12 12:40:01.759953] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:50.571 [2024-07-12 12:40:01.759963] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:50.571 [2024-07-12 12:40:01.759973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76952 len:8 PRP1 0x0 PRP2 0x0 00:15:50.571 [2024-07-12 12:40:01.759986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.571 [2024-07-12 12:40:01.760000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:50.571 [2024-07-12 12:40:01.760010] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:50.571 [2024-07-12 12:40:01.760021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76960 len:8 PRP1 0x0 PRP2 0x0 00:15:50.571 [2024-07-12 12:40:01.760033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.571 [2024-07-12 12:40:01.760047] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:50.572 [2024-07-12 12:40:01.760056] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:50.572 [2024-07-12 12:40:01.760067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76968 len:8 PRP1 0x0 PRP2 0x0 00:15:50.572 [2024-07-12 12:40:01.760086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.572 [2024-07-12 12:40:01.760100] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:50.572 [2024-07-12 12:40:01.760110] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:50.572 [2024-07-12 12:40:01.760120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76976 len:8 PRP1 0x0 PRP2 0x0 00:15:50.572 [2024-07-12 12:40:01.760133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.572 [2024-07-12 12:40:01.760147] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:50.572 [2024-07-12 12:40:01.760158] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:15:50.572 [2024-07-12 12:40:01.760168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76984 len:8 PRP1 0x0 PRP2 0x0 00:15:50.572 [2024-07-12 12:40:01.760181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.572 [2024-07-12 12:40:01.760195] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:50.572 [2024-07-12 12:40:01.760205] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:50.572 [2024-07-12 12:40:01.760215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76992 len:8 PRP1 0x0 PRP2 0x0 00:15:50.572 [2024-07-12 12:40:01.760228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.572 [2024-07-12 12:40:01.760241] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:50.572 [2024-07-12 12:40:01.760256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:50.572 [2024-07-12 12:40:01.760267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77000 len:8 PRP1 0x0 PRP2 0x0 00:15:50.572 [2024-07-12 12:40:01.760281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.572 [2024-07-12 12:40:01.760294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:50.572 [2024-07-12 12:40:01.760304] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:50.572 [2024-07-12 12:40:01.760314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77008 len:8 PRP1 0x0 PRP2 0x0 00:15:50.572 [2024-07-12 12:40:01.760328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.572 [2024-07-12 12:40:01.760341] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:50.572 [2024-07-12 12:40:01.760351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:50.572 [2024-07-12 12:40:01.760361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77016 len:8 PRP1 0x0 PRP2 0x0 00:15:50.572 [2024-07-12 12:40:01.760374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.572 [2024-07-12 12:40:01.760388] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:50.572 [2024-07-12 12:40:01.760398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:50.572 [2024-07-12 12:40:01.760420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77024 len:8 PRP1 0x0 PRP2 0x0 00:15:50.572 [2024-07-12 12:40:01.760433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.572 [2024-07-12 12:40:01.760447] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:50.572 [2024-07-12 12:40:01.760457] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:50.572 
[2024-07-12 12:40:01.760467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77032 len:8 PRP1 0x0 PRP2 0x0 00:15:50.572 [2024-07-12 12:40:01.760486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.572 [2024-07-12 12:40:01.760500] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:50.572 [2024-07-12 12:40:01.760510] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:50.572 [2024-07-12 12:40:01.760521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77040 len:8 PRP1 0x0 PRP2 0x0 00:15:50.572 [2024-07-12 12:40:01.760534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.572 [2024-07-12 12:40:01.760548] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:50.572 [2024-07-12 12:40:01.760557] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:50.572 [2024-07-12 12:40:01.760567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77048 len:8 PRP1 0x0 PRP2 0x0 00:15:50.572 [2024-07-12 12:40:01.760581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.572 [2024-07-12 12:40:01.760594] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:50.572 [2024-07-12 12:40:01.760603] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:50.572 [2024-07-12 12:40:01.760613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77056 len:8 PRP1 0x0 PRP2 0x0 00:15:50.572 [2024-07-12 12:40:01.760627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.572 [2024-07-12 12:40:01.760647] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:50.572 [2024-07-12 12:40:01.760657] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:50.572 [2024-07-12 12:40:01.760668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77064 len:8 PRP1 0x0 PRP2 0x0 00:15:50.572 [2024-07-12 12:40:01.760680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.572 [2024-07-12 12:40:01.760694] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:50.572 [2024-07-12 12:40:01.760703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:50.572 [2024-07-12 12:40:01.760713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77072 len:8 PRP1 0x0 PRP2 0x0 00:15:50.572 [2024-07-12 12:40:01.760726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.572 [2024-07-12 12:40:01.760740] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:50.572 [2024-07-12 12:40:01.760749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:50.572 [2024-07-12 12:40:01.760759] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77080 len:8 PRP1 0x0 PRP2 0x0 
00:15:50.572 [2024-07-12 12:40:01.760772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:50.572 [2024-07-12 12:40:01.760854] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xcab7c0 was disconnected and freed. reset controller. 
00:15:50.572 [2024-07-12 12:40:01.760873] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 
00:15:50.572 [2024-07-12 12:40:01.760935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:15:50.572 [2024-07-12 12:40:01.760956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:50.572 [2024-07-12 12:40:01.760971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:15:50.572 [2024-07-12 12:40:01.760984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:50.572 [2024-07-12 12:40:01.760998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:15:50.572 [2024-07-12 12:40:01.761017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:50.572 [2024-07-12 12:40:01.761032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:15:50.572 [2024-07-12 12:40:01.761045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:50.572 [2024-07-12 12:40:01.761058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:15:50.572 [2024-07-12 12:40:01.761163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc5a570 (9): Bad file descriptor 
00:15:50.572 [2024-07-12 12:40:01.765136] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:15:50.572 [2024-07-12 12:40:01.806513] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:50.572 [2024-07-12 12:40:05.399343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:85976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.572 [2024-07-12 12:40:05.399432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.572 [2024-07-12 12:40:05.399494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:85984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.572 [2024-07-12 12:40:05.399513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.572 [2024-07-12 12:40:05.399529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:85992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.572 [2024-07-12 12:40:05.399543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.572 [2024-07-12 12:40:05.399558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:86000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.572 [2024-07-12 12:40:05.399572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.572 [2024-07-12 12:40:05.399587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:86008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.572 [2024-07-12 12:40:05.399601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.572 [2024-07-12 12:40:05.399616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:86016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.572 [2024-07-12 12:40:05.399630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.572 [2024-07-12 12:40:05.399645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:86024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.572 [2024-07-12 12:40:05.399658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.572 [2024-07-12 12:40:05.399674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:86032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.572 [2024-07-12 12:40:05.399687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.572 [2024-07-12 12:40:05.399702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:86424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.572 [2024-07-12 12:40:05.399716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.572 [2024-07-12 12:40:05.399732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:86432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.572 [2024-07-12 12:40:05.399745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.572 [2024-07-12 12:40:05.399760] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:86440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.572 [2024-07-12 12:40:05.399774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.573 [2024-07-12 12:40:05.399789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.573 [2024-07-12 12:40:05.399802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.573 [2024-07-12 12:40:05.399817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.573 [2024-07-12 12:40:05.399831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.573 [2024-07-12 12:40:05.399846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.573 [2024-07-12 12:40:05.399860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.573 [2024-07-12 12:40:05.399883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:86472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.573 [2024-07-12 12:40:05.399898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.573 [2024-07-12 12:40:05.399913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:86480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.573 [2024-07-12 12:40:05.399926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.573 [2024-07-12 12:40:05.399941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:86040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.573 [2024-07-12 12:40:05.399955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.573 [2024-07-12 12:40:05.399971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:86048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.573 [2024-07-12 12:40:05.399987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.573 [2024-07-12 12:40:05.400003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:86056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.573 [2024-07-12 12:40:05.400016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.573 [2024-07-12 12:40:05.400032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.573 [2024-07-12 12:40:05.400046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.573 [2024-07-12 12:40:05.400061] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:93 nsid:1 lba:86072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.573 [2024-07-12 12:40:05.400075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.573 [2024-07-12 12:40:05.400090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:86080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.573 [2024-07-12 12:40:05.400104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.573 [2024-07-12 12:40:05.400120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:86088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.573 [2024-07-12 12:40:05.400133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.573 [2024-07-12 12:40:05.400148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:86096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.573 [2024-07-12 12:40:05.400161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.573 [2024-07-12 12:40:05.400177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:86488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.573 [2024-07-12 12:40:05.400191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.573 [2024-07-12 12:40:05.400206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:86496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.573 [2024-07-12 12:40:05.400219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.573 [2024-07-12 12:40:05.400235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:86504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.573 [2024-07-12 12:40:05.400254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.573 [2024-07-12 12:40:05.400270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.573 [2024-07-12 12:40:05.400284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.573 [2024-07-12 12:40:05.400299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.573 [2024-07-12 12:40:05.400312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.573 [2024-07-12 12:40:05.400327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:86528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.573 [2024-07-12 12:40:05.400341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.573 [2024-07-12 12:40:05.400356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:86536 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.573 [2024-07-12 12:40:05.400370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.573 [2024-07-12 12:40:05.400385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:86544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.573 [2024-07-12 12:40:05.400399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.573 [2024-07-12 12:40:05.400428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.573 [2024-07-12 12:40:05.400443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.573 [2024-07-12 12:40:05.400458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:86560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.573 [2024-07-12 12:40:05.400472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.573 [2024-07-12 12:40:05.400487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:86568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.573 [2024-07-12 12:40:05.400501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.573 [2024-07-12 12:40:05.400516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.573 [2024-07-12 12:40:05.400530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.573 [2024-07-12 12:40:05.400545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:86584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.573 [2024-07-12 12:40:05.400558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.573 [2024-07-12 12:40:05.400573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:86592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.573 [2024-07-12 12:40:05.400587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.573 [2024-07-12 12:40:05.400602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:86600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.573 [2024-07-12 12:40:05.400616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.573 [2024-07-12 12:40:05.400639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:86608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.573 [2024-07-12 12:40:05.400654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.573 [2024-07-12 12:40:05.400670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:86104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:50.573 [2024-07-12 12:40:05.400684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.573 [2024-07-12 12:40:05.400700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:86112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.573 [2024-07-12 12:40:05.400713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.573 [2024-07-12 12:40:05.400728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:86120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.573 [2024-07-12 12:40:05.400743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.573 [2024-07-12 12:40:05.400759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:86128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.573 [2024-07-12 12:40:05.400772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.573 [2024-07-12 12:40:05.400787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:86136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.573 [2024-07-12 12:40:05.400801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.574 [2024-07-12 12:40:05.400816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:86144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-07-12 12:40:05.400830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.574 [2024-07-12 12:40:05.400845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:86152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-07-12 12:40:05.400859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.574 [2024-07-12 12:40:05.400874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:86160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-07-12 12:40:05.400887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.574 [2024-07-12 12:40:05.400903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:86616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.574 [2024-07-12 12:40:05.400916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.574 [2024-07-12 12:40:05.400931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:86624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.574 [2024-07-12 12:40:05.400945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.574 [2024-07-12 12:40:05.400960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:86632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.574 [2024-07-12 12:40:05.400973] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.574 [2024-07-12 12:40:05.400989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:86640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.574 [2024-07-12 12:40:05.401008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.574 [2024-07-12 12:40:05.401025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:86648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.574 [2024-07-12 12:40:05.401038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.574 [2024-07-12 12:40:05.401053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:86656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.574 [2024-07-12 12:40:05.401067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.574 [2024-07-12 12:40:05.401081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:86664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.574 [2024-07-12 12:40:05.401095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.574 [2024-07-12 12:40:05.401110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:86672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.574 [2024-07-12 12:40:05.401124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.574 [2024-07-12 12:40:05.401140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-07-12 12:40:05.401153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.574 [2024-07-12 12:40:05.401169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:86176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-07-12 12:40:05.401182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.574 [2024-07-12 12:40:05.401198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:86184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-07-12 12:40:05.401212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.574 [2024-07-12 12:40:05.401227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:86192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-07-12 12:40:05.401241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.574 [2024-07-12 12:40:05.401256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:86200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-07-12 12:40:05.401270] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.574 [2024-07-12 12:40:05.401285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:86208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-07-12 12:40:05.401299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.574 [2024-07-12 12:40:05.401314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:86216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-07-12 12:40:05.401327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.574 [2024-07-12 12:40:05.401342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:86224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-07-12 12:40:05.401356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.574 [2024-07-12 12:40:05.401371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:86232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-07-12 12:40:05.401391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.574 [2024-07-12 12:40:05.401417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:86240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-07-12 12:40:05.401432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.574 [2024-07-12 12:40:05.401448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:86248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-07-12 12:40:05.401461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.574 [2024-07-12 12:40:05.401481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:86256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-07-12 12:40:05.401494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.574 [2024-07-12 12:40:05.401510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:86264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-07-12 12:40:05.401523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.574 [2024-07-12 12:40:05.401539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:86272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-07-12 12:40:05.401552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.574 [2024-07-12 12:40:05.401567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:86280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-07-12 12:40:05.401581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.574 [2024-07-12 12:40:05.401596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:86288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.574 [2024-07-12 12:40:05.401611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.574 [2024-07-12 12:40:05.401626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:86680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.574 [2024-07-12 12:40:05.401639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.574 [2024-07-12 12:40:05.401655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:86688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.574 [2024-07-12 12:40:05.401668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.574 [2024-07-12 12:40:05.401684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.574 [2024-07-12 12:40:05.401698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.574 [2024-07-12 12:40:05.401714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:86704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.574 [2024-07-12 12:40:05.401727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.574 [2024-07-12 12:40:05.401742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:86712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.574 [2024-07-12 12:40:05.401756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.574 [2024-07-12 12:40:05.401777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:86720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.574 [2024-07-12 12:40:05.401791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.574 [2024-07-12 12:40:05.401806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:86728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.574 [2024-07-12 12:40:05.401820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.574 [2024-07-12 12:40:05.401835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:86736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.574 [2024-07-12 12:40:05.401849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.574 [2024-07-12 12:40:05.401864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.574 [2024-07-12 12:40:05.401880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:50.574 [2024-07-12 12:40:05.401895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:86752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.574 [2024-07-12 12:40:05.401909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.574 [2024-07-12 12:40:05.401925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:86760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.574 [2024-07-12 12:40:05.401938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.574 [2024-07-12 12:40:05.401953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:86768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.574 [2024-07-12 12:40:05.401967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.574 [2024-07-12 12:40:05.401982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:86776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.574 [2024-07-12 12:40:05.401996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.574 [2024-07-12 12:40:05.402011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:86784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.574 [2024-07-12 12:40:05.402025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.574 [2024-07-12 12:40:05.402040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:86792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.574 [2024-07-12 12:40:05.402053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.574 [2024-07-12 12:40:05.402069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:86800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.574 [2024-07-12 12:40:05.402083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.575 [2024-07-12 12:40:05.402099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:86808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.575 [2024-07-12 12:40:05.402112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.575 [2024-07-12 12:40:05.402127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:86816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.575 [2024-07-12 12:40:05.402150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.575 [2024-07-12 12:40:05.402167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:86824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.575 [2024-07-12 12:40:05.402190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.575 [2024-07-12 12:40:05.402206] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:86832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.575 [2024-07-12 12:40:05.402219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.575 [2024-07-12 12:40:05.402235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:86840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.575 [2024-07-12 12:40:05.402248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.575 [2024-07-12 12:40:05.402264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:86848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.575 [2024-07-12 12:40:05.402277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.575 [2024-07-12 12:40:05.402292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:86856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.575 [2024-07-12 12:40:05.402306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.575 [2024-07-12 12:40:05.402321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:86864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.575 [2024-07-12 12:40:05.402334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.575 [2024-07-12 12:40:05.402349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:86296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.575 [2024-07-12 12:40:05.402363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.575 [2024-07-12 12:40:05.402377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:86304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.575 [2024-07-12 12:40:05.402391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.575 [2024-07-12 12:40:05.402417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:86312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.575 [2024-07-12 12:40:05.402432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.575 [2024-07-12 12:40:05.402447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:86320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.575 [2024-07-12 12:40:05.402461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.575 [2024-07-12 12:40:05.402476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:86328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.575 [2024-07-12 12:40:05.402490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.575 [2024-07-12 12:40:05.402505] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:21 nsid:1 lba:86336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.575 [2024-07-12 12:40:05.402519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.575 [2024-07-12 12:40:05.402541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:86344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.575 [2024-07-12 12:40:05.402555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.575 [2024-07-12 12:40:05.402570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:86352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.575 [2024-07-12 12:40:05.402583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.575 [2024-07-12 12:40:05.402598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:86872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.575 [2024-07-12 12:40:05.402611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.575 [2024-07-12 12:40:05.402626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:86880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.575 [2024-07-12 12:40:05.402640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.575 [2024-07-12 12:40:05.402655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:86888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.575 [2024-07-12 12:40:05.402669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.575 [2024-07-12 12:40:05.402683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:86896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.575 [2024-07-12 12:40:05.402696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.575 [2024-07-12 12:40:05.402711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:86904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.575 [2024-07-12 12:40:05.402725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.575 [2024-07-12 12:40:05.402739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:86912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.575 [2024-07-12 12:40:05.402752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.575 [2024-07-12 12:40:05.402767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:86920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.575 [2024-07-12 12:40:05.402780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.575 [2024-07-12 12:40:05.402795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:86928 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.575 [2024-07-12 12:40:05.402809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.575 [2024-07-12 12:40:05.402824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:86936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.575 [2024-07-12 12:40:05.402837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.575 [2024-07-12 12:40:05.402852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:86944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.575 [2024-07-12 12:40:05.402865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.575 [2024-07-12 12:40:05.402880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:86952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.575 [2024-07-12 12:40:05.402893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.575 [2024-07-12 12:40:05.402914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:86960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.575 [2024-07-12 12:40:05.402929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.575 [2024-07-12 12:40:05.402944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:86968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.575 [2024-07-12 12:40:05.402958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.575 [2024-07-12 12:40:05.402973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:86976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.575 [2024-07-12 12:40:05.402986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.575 [2024-07-12 12:40:05.403001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:86984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.575 [2024-07-12 12:40:05.403015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.575 [2024-07-12 12:40:05.403030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:86992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.575 [2024-07-12 12:40:05.403043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.575 [2024-07-12 12:40:05.403058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:86360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.575 [2024-07-12 12:40:05.403072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.575 [2024-07-12 12:40:05.403087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:86368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:15:50.575 [2024-07-12 12:40:05.403100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.575 [2024-07-12 12:40:05.403115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:86376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.575 [2024-07-12 12:40:05.403128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.575 [2024-07-12 12:40:05.403143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:86384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.575 [2024-07-12 12:40:05.403157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.575 [2024-07-12 12:40:05.403172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:86392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.575 [2024-07-12 12:40:05.403185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.575 [2024-07-12 12:40:05.403200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.575 [2024-07-12 12:40:05.403213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.575 [2024-07-12 12:40:05.403229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:86408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.575 [2024-07-12 12:40:05.403244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.575 [2024-07-12 12:40:05.403291] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:50.575 [2024-07-12 12:40:05.403312] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:50.575 [2024-07-12 12:40:05.403337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86416 len:8 PRP1 0x0 PRP2 0x0 00:15:50.575 [2024-07-12 12:40:05.403351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.575 [2024-07-12 12:40:05.403430] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xcdcd30 was disconnected and freed. reset controller. 
00:15:50.575 [2024-07-12 12:40:05.403450] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 
00:15:50.575 [2024-07-12 12:40:05.403506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:15:50.575 [2024-07-12 12:40:05.403526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:50.575 [2024-07-12 12:40:05.403541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:15:50.576 [2024-07-12 12:40:05.403554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:50.576 [2024-07-12 12:40:05.403568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:15:50.576 [2024-07-12 12:40:05.403581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:50.576 [2024-07-12 12:40:05.403596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:15:50.576 [2024-07-12 12:40:05.403609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:50.576 [2024-07-12 12:40:05.403622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:15:50.576 [2024-07-12 12:40:05.403656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc5a570 (9): Bad file descriptor 
00:15:50.576 [2024-07-12 12:40:05.407475] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:15:50.576 [2024-07-12 12:40:05.444810] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:50.576 [2024-07-12 12:40:09.915631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:50.576 [2024-07-12 12:40:09.915703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.576 [2024-07-12 12:40:09.915723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:50.576 [2024-07-12 12:40:09.915737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.576 [2024-07-12 12:40:09.915752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:50.576 [2024-07-12 12:40:09.915780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.576 [2024-07-12 12:40:09.915794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:50.576 [2024-07-12 12:40:09.915807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.576 [2024-07-12 12:40:09.915820] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5a570 is same with the state(5) to be set 00:15:50.576 [2024-07-12 12:40:09.916627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.576 [2024-07-12 12:40:09.916689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.576 [2024-07-12 12:40:09.916716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:32848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.576 [2024-07-12 12:40:09.916732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.576 [2024-07-12 12:40:09.916747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:32856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.576 [2024-07-12 12:40:09.916761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.576 [2024-07-12 12:40:09.916776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:32864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.576 [2024-07-12 12:40:09.916790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.576 [2024-07-12 12:40:09.916805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:32872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.576 [2024-07-12 12:40:09.916817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.576 [2024-07-12 12:40:09.916832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:32880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.576 [2024-07-12 12:40:09.916846] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.576 [2024-07-12 12:40:09.916861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:32888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.576 [2024-07-12 12:40:09.916874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.576 [2024-07-12 12:40:09.916889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:32896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.576 [2024-07-12 12:40:09.916903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.576 [2024-07-12 12:40:09.916919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:32392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.576 [2024-07-12 12:40:09.916932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.576 [2024-07-12 12:40:09.916947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:32400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.576 [2024-07-12 12:40:09.916961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.576 [2024-07-12 12:40:09.916976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:32408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.576 [2024-07-12 12:40:09.916990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.576 [2024-07-12 12:40:09.917005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:32416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.576 [2024-07-12 12:40:09.917018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.576 [2024-07-12 12:40:09.917033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:32424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.576 [2024-07-12 12:40:09.917047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.576 [2024-07-12 12:40:09.917071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:32432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.576 [2024-07-12 12:40:09.917085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.576 [2024-07-12 12:40:09.917100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:32440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.576 [2024-07-12 12:40:09.917113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.576 [2024-07-12 12:40:09.917128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.576 [2024-07-12 12:40:09.917141] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.576 [2024-07-12 12:40:09.917157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.576 [2024-07-12 12:40:09.917170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.576 [2024-07-12 12:40:09.917185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.576 [2024-07-12 12:40:09.917198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.576 [2024-07-12 12:40:09.917213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:32472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.576 [2024-07-12 12:40:09.917226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.576 [2024-07-12 12:40:09.917241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.576 [2024-07-12 12:40:09.917254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.576 [2024-07-12 12:40:09.917269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.576 [2024-07-12 12:40:09.917282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.576 [2024-07-12 12:40:09.917297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.576 [2024-07-12 12:40:09.917310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.576 [2024-07-12 12:40:09.917325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.576 [2024-07-12 12:40:09.917338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.576 [2024-07-12 12:40:09.917353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:32512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.576 [2024-07-12 12:40:09.917366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.576 [2024-07-12 12:40:09.917381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:32904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.576 [2024-07-12 12:40:09.917394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.576 [2024-07-12 12:40:09.917424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:32912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.576 [2024-07-12 12:40:09.917446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.576 [2024-07-12 12:40:09.917462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:32920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.576 [2024-07-12 12:40:09.917476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.576 [2024-07-12 12:40:09.917491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:32928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.576 [2024-07-12 12:40:09.917504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.576 [2024-07-12 12:40:09.917519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:32936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.576 [2024-07-12 12:40:09.917532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.576 [2024-07-12 12:40:09.917550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:32944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.576 [2024-07-12 12:40:09.917564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.576 [2024-07-12 12:40:09.917579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:32952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.576 [2024-07-12 12:40:09.917593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.576 [2024-07-12 12:40:09.917608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:32960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.576 [2024-07-12 12:40:09.917622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.576 [2024-07-12 12:40:09.917637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:32520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.576 [2024-07-12 12:40:09.917651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.576 [2024-07-12 12:40:09.917666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:32528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.577 [2024-07-12 12:40:09.917680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.577 [2024-07-12 12:40:09.917695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:32536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.577 [2024-07-12 12:40:09.917708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.577 [2024-07-12 12:40:09.917724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:32544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.577 [2024-07-12 12:40:09.917737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:15:50.577 [2024-07-12 12:40:09.917752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:32552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.577 [2024-07-12 12:40:09.917766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.577 [2024-07-12 12:40:09.917781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:32560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.577 [2024-07-12 12:40:09.917795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.577 [2024-07-12 12:40:09.917810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:32568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.577 [2024-07-12 12:40:09.917903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.577 [2024-07-12 12:40:09.917921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:32576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.577 [2024-07-12 12:40:09.917935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.577 [2024-07-12 12:40:09.917950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.577 [2024-07-12 12:40:09.917964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.577 [2024-07-12 12:40:09.917979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.577 [2024-07-12 12:40:09.917992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.577 [2024-07-12 12:40:09.918007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:32984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.577 [2024-07-12 12:40:09.918021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.577 [2024-07-12 12:40:09.918036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:32992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.577 [2024-07-12 12:40:09.918049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.577 [2024-07-12 12:40:09.918064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:33000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.577 [2024-07-12 12:40:09.918078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.577 [2024-07-12 12:40:09.918095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:33008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.577 [2024-07-12 12:40:09.918109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.577 [2024-07-12 12:40:09.918124] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:33016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.577 [2024-07-12 12:40:09.918138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.577 [2024-07-12 12:40:09.918154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:33024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.577 [2024-07-12 12:40:09.918167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.577 [2024-07-12 12:40:09.918182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:32584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.577 [2024-07-12 12:40:09.918196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.577 [2024-07-12 12:40:09.918211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:32592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.577 [2024-07-12 12:40:09.918224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.577 [2024-07-12 12:40:09.918240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:32600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.577 [2024-07-12 12:40:09.918253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.577 [2024-07-12 12:40:09.918275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:32608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.577 [2024-07-12 12:40:09.918289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.577 [2024-07-12 12:40:09.918305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:32616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.577 [2024-07-12 12:40:09.918318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.577 [2024-07-12 12:40:09.918333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:32624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.577 [2024-07-12 12:40:09.918347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.577 [2024-07-12 12:40:09.918362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:32632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.577 [2024-07-12 12:40:09.918375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.577 [2024-07-12 12:40:09.918390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:32640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.577 [2024-07-12 12:40:09.918416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.577 [2024-07-12 12:40:09.918434] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:33032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.577 [2024-07-12 12:40:09.918448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.577 [2024-07-12 12:40:09.918463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:33040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.577 [2024-07-12 12:40:09.918477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.577 [2024-07-12 12:40:09.918492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:33048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.577 [2024-07-12 12:40:09.918505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.577 [2024-07-12 12:40:09.918520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:33056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.577 [2024-07-12 12:40:09.918534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.577 [2024-07-12 12:40:09.918549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:33064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.577 [2024-07-12 12:40:09.918563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.577 [2024-07-12 12:40:09.918579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:33072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.577 [2024-07-12 12:40:09.918593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.577 [2024-07-12 12:40:09.918608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:33080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.577 [2024-07-12 12:40:09.918621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.577 [2024-07-12 12:40:09.918637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:33088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.577 [2024-07-12 12:40:09.918657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.577 [2024-07-12 12:40:09.918673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:33096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.577 [2024-07-12 12:40:09.918686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.577 [2024-07-12 12:40:09.918701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:33104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.577 [2024-07-12 12:40:09.918715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.577 [2024-07-12 12:40:09.918731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:33112 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.577 [2024-07-12 12:40:09.918744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.577 [2024-07-12 12:40:09.918759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:33120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.577 [2024-07-12 12:40:09.918772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.577 [2024-07-12 12:40:09.918788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:33128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.577 [2024-07-12 12:40:09.918801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.577 [2024-07-12 12:40:09.918816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:33136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.577 [2024-07-12 12:40:09.918830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.577 [2024-07-12 12:40:09.918845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:33144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.577 [2024-07-12 12:40:09.918858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.578 [2024-07-12 12:40:09.918873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:33152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.578 [2024-07-12 12:40:09.918887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.578 [2024-07-12 12:40:09.918902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:32648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.578 [2024-07-12 12:40:09.918915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.578 [2024-07-12 12:40:09.918930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:32656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.578 [2024-07-12 12:40:09.918944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.578 [2024-07-12 12:40:09.918959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:32664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.578 [2024-07-12 12:40:09.918972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.578 [2024-07-12 12:40:09.918987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:32672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.578 [2024-07-12 12:40:09.919000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.578 [2024-07-12 12:40:09.919022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:32680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.578 
[2024-07-12 12:40:09.919037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.578 [2024-07-12 12:40:09.919053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:32688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.578 [2024-07-12 12:40:09.919067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.578 [2024-07-12 12:40:09.919082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:32696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.578 [2024-07-12 12:40:09.919096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.578 [2024-07-12 12:40:09.919112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:32704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.578 [2024-07-12 12:40:09.919125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.578 [2024-07-12 12:40:09.919140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:32712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.578 [2024-07-12 12:40:09.919154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.578 [2024-07-12 12:40:09.919169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:32720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.578 [2024-07-12 12:40:09.919182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.578 [2024-07-12 12:40:09.919198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:32728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.578 [2024-07-12 12:40:09.919211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.578 [2024-07-12 12:40:09.919226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:32736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.578 [2024-07-12 12:40:09.919239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.578 [2024-07-12 12:40:09.919255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:32744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.578 [2024-07-12 12:40:09.919268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.578 [2024-07-12 12:40:09.919283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:32752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.578 [2024-07-12 12:40:09.919296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.578 [2024-07-12 12:40:09.919311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:32760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.578 [2024-07-12 12:40:09.919336] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.578 [2024-07-12 12:40:09.919353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.578 [2024-07-12 12:40:09.919367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.578 [2024-07-12 12:40:09.919382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:33160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.578 [2024-07-12 12:40:09.919396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.578 [2024-07-12 12:40:09.919428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:33168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.578 [2024-07-12 12:40:09.919443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.578 [2024-07-12 12:40:09.919458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:33176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.578 [2024-07-12 12:40:09.919472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.578 [2024-07-12 12:40:09.919487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:33184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.578 [2024-07-12 12:40:09.919500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.578 [2024-07-12 12:40:09.919516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:33192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.578 [2024-07-12 12:40:09.919529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.578 [2024-07-12 12:40:09.919545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:33200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.578 [2024-07-12 12:40:09.919558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.578 [2024-07-12 12:40:09.919573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:33208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.578 [2024-07-12 12:40:09.919587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.578 [2024-07-12 12:40:09.919602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:33216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.578 [2024-07-12 12:40:09.919615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.578 [2024-07-12 12:40:09.919630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:33224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.578 [2024-07-12 12:40:09.919643] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.578 [2024-07-12 12:40:09.919659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:33232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.578 [2024-07-12 12:40:09.919672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.578 [2024-07-12 12:40:09.919687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:33240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.578 [2024-07-12 12:40:09.919700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.578 [2024-07-12 12:40:09.919715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:33248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.578 [2024-07-12 12:40:09.919728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.578 [2024-07-12 12:40:09.919743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:33256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.578 [2024-07-12 12:40:09.919756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.578 [2024-07-12 12:40:09.919771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:33264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.578 [2024-07-12 12:40:09.919790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.578 [2024-07-12 12:40:09.919806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:33272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.578 [2024-07-12 12:40:09.919820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.578 [2024-07-12 12:40:09.919834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:33280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:50.578 [2024-07-12 12:40:09.919848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.578 [2024-07-12 12:40:09.919862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:32776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.578 [2024-07-12 12:40:09.919876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.578 [2024-07-12 12:40:09.919891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:32784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.578 [2024-07-12 12:40:09.919904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.578 [2024-07-12 12:40:09.919919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:32792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.578 [2024-07-12 12:40:09.919932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.578 [2024-07-12 12:40:09.919947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:32800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.578 [2024-07-12 12:40:09.919960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.578 [2024-07-12 12:40:09.919976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:32808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.578 [2024-07-12 12:40:09.919990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.578 [2024-07-12 12:40:09.920006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:32816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.578 [2024-07-12 12:40:09.920019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.578 [2024-07-12 12:40:09.920034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:32824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:50.578 [2024-07-12 12:40:09.920048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.578 [2024-07-12 12:40:09.920062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdbdd0 is same with the state(5) to be set 00:15:50.578 [2024-07-12 12:40:09.920078] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:50.578 [2024-07-12 12:40:09.920088] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:50.578 [2024-07-12 12:40:09.920099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32832 len:8 PRP1 0x0 PRP2 0x0 00:15:50.578 [2024-07-12 12:40:09.920112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.578 [2024-07-12 12:40:09.920126] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:50.578 [2024-07-12 12:40:09.920136] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:50.579 [2024-07-12 12:40:09.920152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33288 len:8 PRP1 0x0 PRP2 0x0 00:15:50.579 [2024-07-12 12:40:09.920166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.579 [2024-07-12 12:40:09.920180] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:50.579 [2024-07-12 12:40:09.920189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:50.579 [2024-07-12 12:40:09.920199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33296 len:8 PRP1 0x0 PRP2 0x0 00:15:50.579 [2024-07-12 12:40:09.920212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.579 [2024-07-12 12:40:09.920225] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:50.579 [2024-07-12 12:40:09.920235] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:50.579 [2024-07-12 12:40:09.920244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33304 len:8 PRP1 0x0 PRP2 0x0 00:15:50.579 [2024-07-12 12:40:09.920257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.579 [2024-07-12 12:40:09.920270] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:50.579 [2024-07-12 12:40:09.920279] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:50.579 [2024-07-12 12:40:09.920289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33312 len:8 PRP1 0x0 PRP2 0x0 00:15:50.579 [2024-07-12 12:40:09.920302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.579 [2024-07-12 12:40:09.920315] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:50.579 [2024-07-12 12:40:09.920324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:50.579 [2024-07-12 12:40:09.920334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33320 len:8 PRP1 0x0 PRP2 0x0 00:15:50.579 [2024-07-12 12:40:09.920347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.579 [2024-07-12 12:40:09.920361] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:50.579 [2024-07-12 12:40:09.920370] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:50.579 [2024-07-12 12:40:09.920381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33328 len:8 PRP1 0x0 PRP2 0x0 00:15:50.579 [2024-07-12 12:40:09.920394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.579 [2024-07-12 12:40:09.920421] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:50.579 [2024-07-12 12:40:09.920432] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:50.579 [2024-07-12 12:40:09.920443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33336 len:8 PRP1 0x0 PRP2 0x0 00:15:50.579 [2024-07-12 12:40:09.920456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.579 [2024-07-12 12:40:09.920469] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:50.579 [2024-07-12 12:40:09.920479] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:50.579 [2024-07-12 12:40:09.920489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33344 len:8 PRP1 0x0 PRP2 0x0 00:15:50.579 [2024-07-12 12:40:09.920503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.579 [2024-07-12 12:40:09.920516] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:50.579 [2024-07-12 12:40:09.920532] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:15:50.579 [2024-07-12 12:40:09.920543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33352 len:8 PRP1 0x0 PRP2 0x0 00:15:50.579 [2024-07-12 12:40:09.920556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.579 [2024-07-12 12:40:09.920570] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:50.579 [2024-07-12 12:40:09.920580] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:50.579 [2024-07-12 12:40:09.920590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33360 len:8 PRP1 0x0 PRP2 0x0 00:15:50.579 [2024-07-12 12:40:09.920603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.579 [2024-07-12 12:40:09.920617] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:50.579 [2024-07-12 12:40:09.920627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:50.579 [2024-07-12 12:40:09.920638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33368 len:8 PRP1 0x0 PRP2 0x0 00:15:50.579 [2024-07-12 12:40:09.920651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.579 [2024-07-12 12:40:09.920665] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:50.579 [2024-07-12 12:40:09.920674] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:50.579 [2024-07-12 12:40:09.920685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33376 len:8 PRP1 0x0 PRP2 0x0 00:15:50.579 [2024-07-12 12:40:09.920698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.579 [2024-07-12 12:40:09.920711] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:50.579 [2024-07-12 12:40:09.920721] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:50.579 [2024-07-12 12:40:09.920731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33384 len:8 PRP1 0x0 PRP2 0x0 00:15:50.579 [2024-07-12 12:40:09.920744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.579 [2024-07-12 12:40:09.920758] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:50.579 [2024-07-12 12:40:09.920768] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:50.579 [2024-07-12 12:40:09.920779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33392 len:8 PRP1 0x0 PRP2 0x0 00:15:50.579 [2024-07-12 12:40:09.920792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.579 [2024-07-12 12:40:09.920806] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:50.579 [2024-07-12 12:40:09.920816] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:50.579 [2024-07-12 
12:40:09.920827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33400 len:8 PRP1 0x0 PRP2 0x0 00:15:50.579 [2024-07-12 12:40:09.920840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.579 [2024-07-12 12:40:09.920853] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:50.579 [2024-07-12 12:40:09.920863] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:50.579 [2024-07-12 12:40:09.920873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33408 len:8 PRP1 0x0 PRP2 0x0 00:15:50.579 [2024-07-12 12:40:09.920887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:50.579 [2024-07-12 12:40:09.920961] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xcdbdd0 was disconnected and freed. reset controller. 00:15:50.579 [2024-07-12 12:40:09.920979] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:15:50.579 [2024-07-12 12:40:09.920994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:50.579 [2024-07-12 12:40:09.924838] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:50.579 [2024-07-12 12:40:09.924880] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc5a570 (9): Bad file descriptor 00:15:50.579 [2024-07-12 12:40:09.956978] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:50.579 00:15:50.579 Latency(us) 00:15:50.579 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:50.579 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:50.579 Verification LBA range: start 0x0 length 0x4000 00:15:50.579 NVMe0n1 : 15.01 9105.68 35.57 223.65 0.00 13687.55 659.08 24188.74 00:15:50.579 =================================================================================================================== 00:15:50.579 Total : 9105.68 35.57 223.65 0.00 13687.55 659.08 24188.74 00:15:50.579 Received shutdown signal, test time was about 15.000000 seconds 00:15:50.579 00:15:50.579 Latency(us) 00:15:50.579 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:50.579 =================================================================================================================== 00:15:50.579 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:50.579 12:40:15 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:15:50.579 12:40:15 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:15:50.579 12:40:15 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:15:50.579 12:40:15 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=76271 00:15:50.579 12:40:15 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 76271 /var/tmp/bdevperf.sock 00:15:50.579 12:40:15 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:15:50.579 12:40:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 76271 ']' 00:15:50.579 12:40:15 nvmf_tcp.nvmf_failover -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:50.579 12:40:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:50.579 12:40:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:50.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:50.579 12:40:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:50.579 12:40:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:15:51.144 12:40:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:51.144 12:40:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:15:51.144 12:40:16 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:51.144 [2024-07-12 12:40:17.184913] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:51.144 12:40:17 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:51.412 [2024-07-12 12:40:17.409053] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:51.412 12:40:17 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:51.684 NVMe0n1 00:15:51.942 12:40:17 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:52.200 00:15:52.200 12:40:18 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:52.458 00:15:52.458 12:40:18 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:15:52.458 12:40:18 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:52.717 12:40:18 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:52.974 12:40:18 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:15:56.255 12:40:21 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:56.255 12:40:21 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:15:56.255 12:40:22 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=76348 00:15:56.255 12:40:22 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:56.255 12:40:22 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 76348 00:15:57.657 0 00:15:57.657 12:40:23 nvmf_tcp.nvmf_failover -- 
host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:57.657 [2024-07-12 12:40:16.010224] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:15:57.657 [2024-07-12 12:40:16.010331] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76271 ] 00:15:57.657 [2024-07-12 12:40:16.145081] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.657 [2024-07-12 12:40:16.266263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.657 [2024-07-12 12:40:16.325871] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:15:57.657 [2024-07-12 12:40:18.833905] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:15:57.657 [2024-07-12 12:40:18.834056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:57.657 [2024-07-12 12:40:18.834079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:57.657 [2024-07-12 12:40:18.834096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:57.657 [2024-07-12 12:40:18.834109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:57.657 [2024-07-12 12:40:18.834123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:57.657 [2024-07-12 12:40:18.834136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:57.657 [2024-07-12 12:40:18.834149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:57.657 [2024-07-12 12:40:18.834161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:57.657 [2024-07-12 12:40:18.834175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:15:57.657 [2024-07-12 12:40:18.834226] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:57.657 [2024-07-12 12:40:18.834256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e15570 (9): Bad file descriptor 00:15:57.657 [2024-07-12 12:40:18.840761] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:57.657 Running I/O for 1 seconds... 
00:15:57.657 00:15:57.657 Latency(us) 00:15:57.657 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:57.657 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:57.657 Verification LBA range: start 0x0 length 0x4000 00:15:57.657 NVMe0n1 : 1.02 7055.73 27.56 0.00 0.00 18067.24 2323.55 14954.12 00:15:57.657 =================================================================================================================== 00:15:57.657 Total : 7055.73 27.56 0.00 0.00 18067.24 2323.55 14954.12 00:15:57.657 12:40:23 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:57.657 12:40:23 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:15:57.657 12:40:23 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:57.915 12:40:23 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:57.915 12:40:23 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:15:58.174 12:40:24 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:58.431 12:40:24 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:16:01.749 12:40:27 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:01.749 12:40:27 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:16:01.749 12:40:27 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 76271 00:16:01.749 12:40:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 76271 ']' 00:16:01.749 12:40:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 76271 00:16:01.749 12:40:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:16:01.749 12:40:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:01.749 12:40:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76271 00:16:01.749 killing process with pid 76271 00:16:01.749 12:40:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:01.749 12:40:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:01.749 12:40:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76271' 00:16:01.749 12:40:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 76271 00:16:01.749 12:40:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 76271 00:16:02.007 12:40:27 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:16:02.007 12:40:27 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:02.265 12:40:28 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:16:02.265 12:40:28 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:02.265 12:40:28 
nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:16:02.265 12:40:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:02.265 12:40:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:16:02.265 12:40:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:02.265 12:40:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:16:02.265 12:40:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:02.265 12:40:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:02.265 rmmod nvme_tcp 00:16:02.265 rmmod nvme_fabrics 00:16:02.265 rmmod nvme_keyring 00:16:02.265 12:40:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:02.265 12:40:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:16:02.265 12:40:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:16:02.265 12:40:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 76017 ']' 00:16:02.265 12:40:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 76017 00:16:02.265 12:40:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 76017 ']' 00:16:02.265 12:40:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 76017 00:16:02.265 12:40:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:16:02.265 12:40:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:02.265 12:40:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76017 00:16:02.265 killing process with pid 76017 00:16:02.265 12:40:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:02.265 12:40:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:02.265 12:40:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76017' 00:16:02.265 12:40:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 76017 00:16:02.265 12:40:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 76017 00:16:02.523 12:40:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:02.523 12:40:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:02.523 12:40:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:02.523 12:40:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:02.523 12:40:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:02.523 12:40:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.523 12:40:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:02.523 12:40:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.523 12:40:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:02.523 ************************************ 00:16:02.523 END TEST nvmf_failover 00:16:02.523 ************************************ 00:16:02.523 00:16:02.523 real 0m32.996s 00:16:02.523 user 2m7.684s 00:16:02.523 sys 0m5.803s 00:16:02.523 12:40:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:02.523 12:40:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:16:02.781 12:40:28 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:16:02.781 12:40:28 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:02.781 12:40:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:02.781 12:40:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:02.781 12:40:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:02.781 ************************************ 00:16:02.781 START TEST nvmf_host_discovery 00:16:02.781 ************************************ 00:16:02.781 12:40:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:02.781 * Looking for test storage... 00:16:02.781 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:02.781 12:40:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:02.781 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:16:02.781 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:02.781 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:02.781 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:02.781 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:02.781 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:02.781 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:02.781 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:02.781 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:02.781 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:02.781 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:02.781 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:16:02.781 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=16360ad5-8c23-4d49-afe0-9a35c426fec5 00:16:02.781 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:02.781 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:02.781 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:02.781 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:02.781 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:02.781 12:40:28 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:02.781 12:40:28 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:02.781 12:40:28 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:02.781 12:40:28 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.781 12:40:28 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.781 12:40:28 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.781 12:40:28 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:16:02.781 12:40:28 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.781 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:16:02.781 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:02.781 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:02.781 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:02.782 Cannot find device "nvmf_tgt_br" 00:16:02.782 
12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:02.782 Cannot find device "nvmf_tgt_br2" 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:02.782 Cannot find device "nvmf_tgt_br" 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:02.782 Cannot find device "nvmf_tgt_br2" 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:02.782 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:03.040 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:03.040 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:03.040 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:16:03.040 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:03.040 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:03.040 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:16:03.040 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:03.040 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:03.040 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:03.040 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:03.040 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:03.040 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:03.040 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:03.040 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:03.040 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:03.040 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:03.040 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:03.040 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:03.040 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:03.041 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:03.041 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:03.041 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:03.041 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:03.041 12:40:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:03.041 12:40:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:03.041 12:40:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:03.041 12:40:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:03.041 12:40:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:03.041 12:40:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:03.041 12:40:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:03.041 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:03.041 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:16:03.041 00:16:03.041 --- 10.0.0.2 ping statistics --- 00:16:03.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.041 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:16:03.041 12:40:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:03.041 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:03.041 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:16:03.041 00:16:03.041 --- 10.0.0.3 ping statistics --- 00:16:03.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.041 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:16:03.041 12:40:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:03.041 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:03.041 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:16:03.041 00:16:03.041 --- 10.0.0.1 ping statistics --- 00:16:03.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.041 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:16:03.041 12:40:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:03.041 12:40:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:16:03.041 12:40:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:03.041 12:40:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:03.041 12:40:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:03.041 12:40:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:03.041 12:40:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:03.041 12:40:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:03.041 12:40:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:03.041 12:40:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:16:03.041 12:40:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:03.041 12:40:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:03.041 12:40:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.041 12:40:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=76616 00:16:03.041 12:40:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:03.041 12:40:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 76616 00:16:03.041 12:40:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 76616 ']' 00:16:03.041 12:40:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.041 12:40:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:03.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:03.041 12:40:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:03.041 12:40:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:03.041 12:40:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.299 [2024-07-12 12:40:29.137820] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:16:03.299 [2024-07-12 12:40:29.137914] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:03.299 [2024-07-12 12:40:29.274047] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.557 [2024-07-12 12:40:29.378501] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
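For reference, the topology that nvmf_veth_init assembled in the records above, and on which the nvmf_tgt instance now starting inside nvmf_tgt_ns_spdk will listen, condenses to the sketch below. Every command is taken from this run's own trace (the individual 'ip link set ... up' steps and the error-tolerant cleanup attempts are omitted for brevity); interface names and addresses are exactly those used here:

  ip netns add nvmf_tgt_ns_spdk
  # three veth pairs: the initiator side stays on the host, the target sides move into the namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # initiator address 10.0.0.1, target addresses 10.0.0.2 and 10.0.0.3
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bridge the host-side peers together and open the NVMe/TCP port
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # connectivity check in both directions, as in the ping output above
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

Because NVMF_APP is prefixed with 'ip netns exec nvmf_tgt_ns_spdk' (common.sh@209 above), the discovery and subsystem listeners created in the following records bind to 10.0.0.2 inside that namespace, while the second nvmf_tgt started later on /tmp/host.sock reaches them from the host side at 10.0.0.1.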
00:16:03.557 [2024-07-12 12:40:29.378587] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:03.557 [2024-07-12 12:40:29.378600] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:03.557 [2024-07-12 12:40:29.378609] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:03.557 [2024-07-12 12:40:29.378617] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:03.557 [2024-07-12 12:40:29.378642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:03.557 [2024-07-12 12:40:29.435235] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:04.123 12:40:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:04.123 12:40:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:16:04.123 12:40:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:04.123 12:40:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:04.123 12:40:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.380 12:40:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:04.380 12:40:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:04.380 12:40:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.380 12:40:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.380 [2024-07-12 12:40:30.203168] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:04.380 12:40:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.380 12:40:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:16:04.380 12:40:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.380 12:40:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.380 [2024-07-12 12:40:30.211294] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:04.380 12:40:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.380 12:40:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:16:04.380 12:40:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.380 12:40:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.380 null0 00:16:04.380 12:40:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.380 12:40:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:16:04.380 12:40:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.380 12:40:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.380 null1 00:16:04.380 12:40:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.380 12:40:30 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:16:04.380 12:40:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.380 12:40:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.380 12:40:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.380 12:40:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=76648 00:16:04.380 12:40:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:16:04.380 12:40:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 76648 /tmp/host.sock 00:16:04.380 12:40:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 76648 ']' 00:16:04.380 12:40:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:16:04.380 12:40:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:04.380 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:04.380 12:40:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:04.380 12:40:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:04.380 12:40:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:04.380 [2024-07-12 12:40:30.300475] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:16:04.380 [2024-07-12 12:40:30.300597] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76648 ] 00:16:04.380 [2024-07-12 12:40:30.443176] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.653 [2024-07-12 12:40:30.601988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.653 [2024-07-12 12:40:30.663464] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:05.231 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:05.231 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:16:05.231 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:05.231 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:16:05.231 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.231 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:05.231 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.231 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:16:05.231 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.231 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:05.231 12:40:31 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.231 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:16:05.231 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:16:05.231 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:05.231 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.231 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:05.231 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:05.231 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:05.231 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:05.231 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.231 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:16:05.231 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:16:05.231 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:05.231 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.231 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:05.231 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:05.231 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:05.231 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:05.231 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.488 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:16:05.488 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:16:05.488 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.488 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:05.488 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.488 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:16:05.488 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:05.489 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.489 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:05.489 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:05.489 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:05.489 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:05.489 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.489 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:16:05.489 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:16:05.489 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:05.489 12:40:31 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:16:05.489 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:05.489 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.489 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:05.489 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:05.489 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.489 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:16:05.489 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:16:05.489 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.489 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:05.489 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.489 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:16:05.489 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:05.489 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:05.489 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.489 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:05.489 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:05.489 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:05.489 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.489 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:16:05.489 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:16:05.489 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:05.489 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:05.489 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.489 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:05.489 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:05.489 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:05.489 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.746 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:16:05.746 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:05.746 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.746 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:05.746 [2024-07-12 12:40:31.575736] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:05.746 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.746 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:16:05.746 
12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:05.746 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:05.746 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:05.746 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.746 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:05.746 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:05.746 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.746 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:16:05.746 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:16:05.746 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:05.746 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.746 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:05.746 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:05.746 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:05.746 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:05.746 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.746 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:16:05.746 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:16:05.746 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:05.746 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:05.746 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:05.746 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:05.746 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:05.746 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:05.746 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:05.746 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:05.746 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.746 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:05.746 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:05.746 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.746 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:05.746 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:16:05.746 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:05.746 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:05.746 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:16:05.746 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.747 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:05.747 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.747 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:05.747 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:05.747 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:05.747 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:05.747 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:05.747 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:16:05.747 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:05.747 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:05.747 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.747 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:05.747 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:05.747 12:40:31 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:05.747 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.747 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:16:05.747 12:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:16:06.312 [2024-07-12 12:40:32.226495] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:06.312 [2024-07-12 12:40:32.226541] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:06.312 [2024-07-12 12:40:32.226565] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:06.312 [2024-07-12 12:40:32.232532] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:06.312 [2024-07-12 12:40:32.290014] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:06.312 [2024-07-12 12:40:32.290045] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:06.878 12:40:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:06.878 12:40:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:06.878 12:40:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:16:06.878 12:40:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:06.878 12:40:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.878 12:40:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.878 12:40:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:06.878 12:40:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:06.878 12:40:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:06.878 12:40:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.878 12:40:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.878 12:40:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:06.878 12:40:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:06.878 12:40:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:06.878 12:40:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:06.878 12:40:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:06.878 12:40:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:16:06.878 12:40:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:16:06.878 12:40:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:06.878 12:40:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.878 12:40:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:06.878 12:40:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.878 12:40:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:06.878 12:40:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:06.878 12:40:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.878 12:40:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:16:06.878 12:40:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:06.878 12:40:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:06.878 12:40:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:06.878 12:40:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:06.878 12:40:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:06.878 12:40:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # 
eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:16:06.878 12:40:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:16:06.878 12:40:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:06.878 12:40:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.878 12:40:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:06.878 12:40:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.878 12:40:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:06.878 12:40:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:06.878 12:40:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.139 12:40:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:16:07.139 12:40:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:07.139 12:40:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:16:07.139 12:40:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:07.139 12:40:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:07.139 12:40:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:07.139 12:40:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:07.139 12:40:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:07.139 12:40:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:07.139 12:40:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:07.139 12:40:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:07.139 12:40:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.139 12:40:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:07.139 12:40:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:07.139 12:40:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.139 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:07.139 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:16:07.139 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:07.139 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:07.139 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:16:07.139 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.139 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:07.139 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.139 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:07.139 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:07.139 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:07.139 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:07.139 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:07.139 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:16:07.139 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:07.139 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:07.139 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.139 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:07.139 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:07.139 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:07.139 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.139 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:07.139 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:07.139 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:16:07.139 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:16:07.139 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:07.139 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:07.139 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:07.139 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:07.139 12:40:33 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:07.139 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:07.139 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:16:07.139 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.139 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:07.139 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:16:07.139 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.139 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:16:07.139 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:07.139 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:07.139 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:07.140 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:16:07.140 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.140 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:07.140 [2024-07-12 12:40:33.169319] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:07.140 [2024-07-12 12:40:33.170276] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:07.140 [2024-07-12 12:40:33.170306] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:07.140 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.140 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:07.140 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:07.140 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:07.140 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:07.140 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:07.140 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:16:07.140 [2024-07-12 12:40:33.176277] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:16:07.140 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:07.140 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.140 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:07.140 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:07.140 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:07.140 12:40:33 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # xargs 00:16:07.140 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:07.398 [2024-07-12 12:40:33.240548] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:07.398 [2024-07-12 12:40:33.240577] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:07.398 [2024-07-12 12:40:33.240586] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:07.398 12:40:33 
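The two target-side RPCs traced above drive this phase of the test: the rpc_cmd calls issued without -s reach the target application on its default RPC socket, while the -s /tmp/host.sock calls query the host-side app. Adding the null1 namespace is what makes the second bdev (nvme0n2) appear on the host, and adding the 4421 listener triggers an AER on the discovery controller, a fresh discovery log page, and a second path for nvme0. Expressed directly through scripts/rpc.py (a sketch of what rpc_cmd appears to forward to, not a verbatim quote of the script):

    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4421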
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:07.398 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:07.399 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:07.399 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:07.399 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.399 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:07.399 [2024-07-12 12:40:33.397986] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:07.399 [2024-07-12 12:40:33.398042] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:07.399 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.399 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:07.399 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:07.399 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:07.399 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:07.399 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:07.399 [2024-07-12 12:40:33.403971] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:16:07.399 [2024-07-12 12:40:33.404020] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:07.399 [2024-07-12 12:40:33.404129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.399 [2024-07-12 12:40:33.404167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.399 [2024-07-12 12:40:33.404179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.399 [2024-07-12 12:40:33.404189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.399 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:16:07.399 [2024-07-12 12:40:33.404215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.399 [2024-07-12 12:40:33.404242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.399 [2024-07-12 12:40:33.404252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:07.399 [2024-07-12 12:40:33.404261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:07.399 [2024-07-12 12:40:33.404271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16cc600 is same with the state(5) to be set 00:16:07.399 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:07.399 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.399 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:07.399 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:07.399 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:07.399 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:07.399 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.656 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.656 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:07.656 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:07.656 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:07.656 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:07.656 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:07.656 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:07.656 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:16:07.656 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:07.656 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.656 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:07.656 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:07.656 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:07.656 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:07.656 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.656 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:07.656 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:07.656 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:07.656 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:07.656 12:40:33 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@913 -- # local max=10 00:16:07.656 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:07.656 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:16:07.656 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:16:07.656 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:07.656 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:07.656 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.656 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:07.656 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:16:07.656 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:16:07.656 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.656 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:16:07.656 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:07.656 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:16:07.656 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:16:07.656 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:07.656 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:07.656 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:07.656 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:07.656 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:07.656 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:07.656 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:07.657 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:07.657 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.657 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:07.657 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.657 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:16:07.657 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:16:07.657 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:07.657 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:07.657 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:16:07.657 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.657 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:07.657 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.657 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:16:07.657 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:16:07.657 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:07.657 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:07.657 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:16:07.657 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:16:07.657 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:07.657 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.657 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:07.657 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:07.657 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:16:07.657 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:16:07.657 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.657 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:16:07.657 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:07.657 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:16:07.657 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:16:07.657 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:07.657 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:07.657 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:16:07.657 
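Nearly every check in this trace goes through the same bounded-poll helper: waitforcondition stores its condition string, eval's it up to max=10 times, and returns 0 on the first success. A rough bash reconstruction follows; the per-attempt sleep and the failure return are assumptions, since every condition in this run passes on the first try and neither shows up in the trace:

    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            if eval "$cond"; then
                return 0
            fi
            sleep 0.1   # polling interval assumed, not visible in this run
        done
        return 1        # retry exhaustion is not exercised here
    }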
12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:16:07.657 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:07.657 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.657 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:07.657 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:07.657 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:07.657 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:07.657 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.914 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:16:07.914 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:07.914 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:16:07.914 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:16:07.914 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:07.914 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:07.914 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:16:07.914 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:16:07.914 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:07.914 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:16:07.914 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:07.914 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.914 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:07.914 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:07.914 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.914 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:16:07.914 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:16:07.914 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:16:07.914 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:16:07.914 12:40:33 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:07.914 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.914 12:40:33 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:08.852 [2024-07-12 12:40:34.828348] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:08.852 [2024-07-12 12:40:34.828389] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:08.852 [2024-07-12 12:40:34.828419] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:08.852 [2024-07-12 12:40:34.834388] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:16:08.852 [2024-07-12 12:40:34.895237] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:08.852 [2024-07-12 12:40:34.895314] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:08.852 12:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.852 12:40:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:08.852 12:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:16:08.852 12:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:08.852 12:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:08.852 12:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:08.852 12:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:08.852 12:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:08.852 12:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:08.852 12:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.852 12:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:08.852 request: 00:16:08.852 { 00:16:08.852 "name": "nvme", 00:16:08.852 "trtype": 
"tcp", 00:16:08.852 "traddr": "10.0.0.2", 00:16:08.852 "adrfam": "ipv4", 00:16:08.852 "trsvcid": "8009", 00:16:08.852 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:08.852 "wait_for_attach": true, 00:16:08.852 "method": "bdev_nvme_start_discovery", 00:16:08.852 "req_id": 1 00:16:08.852 } 00:16:08.852 Got JSON-RPC error response 00:16:08.852 response: 00:16:08.852 { 00:16:08.852 "code": -17, 00:16:08.852 "message": "File exists" 00:16:08.852 } 00:16:08.852 12:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:08.852 12:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:16:08.852 12:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:08.852 12:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:08.852 12:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:08.852 12:40:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:16:08.852 12:40:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:08.852 12:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.852 12:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:08.852 12:40:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:08.852 12:40:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:08.852 12:40:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:09.117 12:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.117 12:40:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:16:09.117 12:40:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:16:09.117 12:40:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:09.117 12:40:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:09.118 12:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.118 12:40:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:09.118 12:40:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:09.118 12:40:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:09.118 12:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.118 12:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:09.118 12:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:09.118 12:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:16:09.118 12:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:09.118 12:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:09.118 12:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:16:09.118 12:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:09.118 12:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:09.118 12:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:09.118 12:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.118 12:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:09.118 request: 00:16:09.118 { 00:16:09.118 "name": "nvme_second", 00:16:09.118 "trtype": "tcp", 00:16:09.118 "traddr": "10.0.0.2", 00:16:09.118 "adrfam": "ipv4", 00:16:09.118 "trsvcid": "8009", 00:16:09.118 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:09.118 "wait_for_attach": true, 00:16:09.118 "method": "bdev_nvme_start_discovery", 00:16:09.118 "req_id": 1 00:16:09.118 } 00:16:09.118 Got JSON-RPC error response 00:16:09.118 response: 00:16:09.118 { 00:16:09.118 "code": -17, 00:16:09.118 "message": "File exists" 00:16:09.118 } 00:16:09.118 12:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:16:09.118 12:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:16:09.118 12:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:09.118 12:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:09.118 12:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:09.118 12:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:16:09.118 12:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:09.118 12:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.118 12:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:09.118 12:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:09.118 12:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:09.118 12:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:09.118 12:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.118 12:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:16:09.118 12:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:16:09.118 12:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:09.118 12:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.118 12:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:09.118 12:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:09.118 12:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:16:09.118 12:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:16:09.118 12:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.118 12:40:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:09.118 12:40:35 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:09.118 12:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:16:09.118 12:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:09.118 12:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:16:09.118 12:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:09.118 12:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:16:09.118 12:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:09.118 12:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:09.118 12:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.118 12:40:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:10.490 [2024-07-12 12:40:36.179988] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:10.490 [2024-07-12 12:40:36.180072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5f20 with addr=10.0.0.2, port=8010 00:16:10.490 [2024-07-12 12:40:36.180100] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:10.490 [2024-07-12 12:40:36.180113] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:10.490 [2024-07-12 12:40:36.180123] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:11.424 [2024-07-12 12:40:37.179973] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:11.424 [2024-07-12 12:40:37.180048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e5f20 with addr=10.0.0.2, port=8010 00:16:11.424 [2024-07-12 12:40:37.180095] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:11.424 [2024-07-12 12:40:37.180106] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:11.424 [2024-07-12 12:40:37.180116] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:12.357 [2024-07-12 12:40:38.179797] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:16:12.357 request: 00:16:12.357 { 00:16:12.357 "name": "nvme_second", 00:16:12.357 "trtype": "tcp", 00:16:12.357 "traddr": "10.0.0.2", 00:16:12.357 "adrfam": "ipv4", 00:16:12.357 "trsvcid": "8010", 00:16:12.357 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:12.357 "wait_for_attach": false, 00:16:12.357 "attach_timeout_ms": 3000, 00:16:12.357 "method": "bdev_nvme_start_discovery", 00:16:12.357 "req_id": 1 00:16:12.357 } 00:16:12.357 Got JSON-RPC error response 00:16:12.357 response: 00:16:12.357 { 00:16:12.357 "code": -110, 00:16:12.357 "message": "Connection timed out" 00:16:12.357 } 00:16:12.357 12:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 
-- # [[ 1 == 0 ]] 00:16:12.357 12:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:16:12.357 12:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:12.357 12:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:12.357 12:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:12.357 12:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:16:12.357 12:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:12.357 12:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:12.357 12:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:16:12.357 12:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.357 12:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:12.357 12:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:16:12.357 12:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.357 12:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:16:12.357 12:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:16:12.357 12:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 76648 00:16:12.357 12:40:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:16:12.357 12:40:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:12.357 12:40:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:16:12.357 12:40:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:12.357 12:40:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:16:12.357 12:40:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:12.357 12:40:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:12.357 rmmod nvme_tcp 00:16:12.357 rmmod nvme_fabrics 00:16:12.357 rmmod nvme_keyring 00:16:12.357 12:40:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:12.357 12:40:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:16:12.357 12:40:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:16:12.357 12:40:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 76616 ']' 00:16:12.357 12:40:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 76616 00:16:12.357 12:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 76616 ']' 00:16:12.357 12:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 76616 00:16:12.357 12:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:16:12.357 12:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:12.357 12:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76616 00:16:12.357 12:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:12.357 12:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:12.357 12:40:38 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 76616' 00:16:12.357 killing process with pid 76616 00:16:12.357 12:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 76616 00:16:12.357 12:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 76616 00:16:12.616 12:40:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:12.616 12:40:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:12.616 12:40:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:12.616 12:40:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:12.616 12:40:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:12.616 12:40:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.616 12:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:12.616 12:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.616 12:40:38 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:12.616 00:16:12.616 real 0m10.059s 00:16:12.616 user 0m19.309s 00:16:12.616 sys 0m2.033s 00:16:12.616 12:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:12.616 ************************************ 00:16:12.616 END TEST nvmf_host_discovery 00:16:12.616 12:40:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:12.616 ************************************ 00:16:12.875 12:40:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:12.875 12:40:38 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:12.875 12:40:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:12.875 12:40:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:12.875 12:40:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:12.875 ************************************ 00:16:12.875 START TEST nvmf_host_multipath_status 00:16:12.875 ************************************ 00:16:12.875 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:16:12.875 * Looking for test storage... 
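The nvmf_host_discovery run that just finished ends with deliberately failing calls to bdev_nvme_start_discovery against the host socket. Re-issuing discovery for 10.0.0.2:8009 while a discovery service for that address is already running is rejected with -17 ("File exists"), and pointing discovery at port 8010, where nothing listens, spends its 3000 ms attach timeout on connect() failures (errno 111) before returning -110 ("Connection timed out"); the NOT wrapper turns these expected errors into passes. Roughly, the direct invocations are:

    # discovery already running for 10.0.0.2:8009 -> -17 "File exists"
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
        -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

    # nothing listening on 8010 -> -110 "Connection timed out" after 3000 ms
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
        -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000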
00:16:12.875 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:12.875 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:12.875 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:16:12.875 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:12.875 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:12.875 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:12.875 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:12.875 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:12.875 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:12.875 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:12.875 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:12.875 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:12.875 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:12.875 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:16:12.875 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=16360ad5-8c23-4d49-afe0-9a35c426fec5 00:16:12.875 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:12.875 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:12.875 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:12.875 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:12.875 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:12.875 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:12.875 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:12.875 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:12.875 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.875 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.875 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.875 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:16:12.875 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.875 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:16:12.875 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:12.876 Cannot find device "nvmf_tgt_br" 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:16:12.876 Cannot find device "nvmf_tgt_br2" 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:12.876 Cannot find device "nvmf_tgt_br" 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:12.876 Cannot find device "nvmf_tgt_br2" 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:16:12.876 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:13.215 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:13.215 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:13.215 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:13.215 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:16:13.215 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:13.215 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:13.215 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:16:13.215 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:13.215 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:13.215 12:40:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:13.215 12:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:13.215 12:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:13.215 12:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:13.215 12:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:13.215 12:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:13.215 12:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:13.215 12:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:13.215 12:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:13.215 12:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:13.215 12:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:13.215 12:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:13.215 12:40:39 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:13.215 12:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:13.215 12:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:13.215 12:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:13.215 12:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:13.215 12:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:13.215 12:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:13.215 12:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:13.215 12:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:13.215 12:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:13.215 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:13.215 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:16:13.215 00:16:13.215 --- 10.0.0.2 ping statistics --- 00:16:13.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.215 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:16:13.215 12:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:13.215 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:13.215 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:16:13.215 00:16:13.215 --- 10.0.0.3 ping statistics --- 00:16:13.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.215 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:16:13.215 12:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:13.215 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
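The nvmf_veth_init sequence above rebuilds the virtual test network from scratch: a target namespace, veth pairs for the initiator and two target interfaces, 10.0.0.0/24 addressing, an nvmf_br bridge tying the host ends together, iptables rules admitting the NVMe/TCP port and bridge-local forwarding, and single-packet pings to confirm reachability. Condensed from the traced commands (the individual ip link set ... up steps are omitted here for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT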
00:16:13.215 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:16:13.215 00:16:13.215 --- 10.0.0.1 ping statistics --- 00:16:13.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.215 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:16:13.215 12:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:13.215 12:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:16:13.215 12:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:13.215 12:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:13.215 12:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:13.215 12:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:13.215 12:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:13.215 12:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:13.215 12:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:13.215 12:40:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:16:13.215 12:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:13.215 12:40:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:13.215 12:40:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:13.215 12:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=77095 00:16:13.215 12:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 77095 00:16:13.215 12:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:13.215 12:40:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 77095 ']' 00:16:13.215 12:40:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.215 12:40:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:13.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:13.215 12:40:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:13.215 12:40:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:13.215 12:40:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:13.473 [2024-07-12 12:40:39.301644] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
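With connectivity between 10.0.0.1 and the two namespaced addresses confirmed by the pings, the host-side veth peers are bridged together and the firewall is opened for NVMe/TCP, after which nvmfappstart launches the target inside the namespace and waitforlisten blocks until it answers on /var/tmp/spdk.sock. Condensed to plain shell, with the same names, addresses, and flags as in the trace:

  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # the target itself runs inside the namespace; it gets pid 77095 in this run
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &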
00:16:13.473 [2024-07-12 12:40:39.301785] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:13.473 [2024-07-12 12:40:39.444920] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:13.731 [2024-07-12 12:40:39.580412] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:13.731 [2024-07-12 12:40:39.580512] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:13.731 [2024-07-12 12:40:39.580527] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:13.731 [2024-07-12 12:40:39.580538] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:13.731 [2024-07-12 12:40:39.580547] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:13.731 [2024-07-12 12:40:39.580752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:13.731 [2024-07-12 12:40:39.580761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.731 [2024-07-12 12:40:39.652583] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:14.294 12:40:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:14.294 12:40:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:16:14.294 12:40:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:14.294 12:40:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:14.294 12:40:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:14.294 12:40:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:14.294 12:40:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=77095 00:16:14.294 12:40:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:14.552 [2024-07-12 12:40:40.600833] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:14.552 12:40:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:15.117 Malloc0 00:16:15.117 12:40:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:16:15.374 12:40:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:15.374 12:40:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:15.632 [2024-07-12 12:40:41.685028] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:15.632 12:40:41 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:15.890 [2024-07-12 12:40:41.933166] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:15.890 12:40:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=77157 00:16:15.890 12:40:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:16:15.890 12:40:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:15.890 12:40:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 77157 /var/tmp/bdevperf.sock 00:16:15.890 12:40:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 77157 ']' 00:16:15.890 12:40:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:15.890 12:40:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:15.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:15.891 12:40:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:15.891 12:40:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:15.891 12:40:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:17.291 12:40:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:17.291 12:40:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:16:17.291 12:40:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:16:17.291 12:40:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:16:17.568 Nvme0n1 00:16:17.568 12:40:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:16:17.826 Nvme0n1 00:16:17.826 12:40:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:16:17.826 12:40:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:16:19.722 12:40:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:16:19.722 12:40:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:16:20.287 12:40:46 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:20.287 12:40:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:16:21.659 12:40:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:16:21.659 12:40:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:21.659 12:40:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:21.659 12:40:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:21.659 12:40:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:21.659 12:40:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:21.659 12:40:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:21.660 12:40:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:21.916 12:40:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:21.916 12:40:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:21.916 12:40:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:21.916 12:40:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:22.173 12:40:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:22.173 12:40:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:22.173 12:40:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:22.173 12:40:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:22.430 12:40:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:22.430 12:40:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:22.430 12:40:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:22.430 12:40:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:22.688 12:40:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:22.688 12:40:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # 
port_status 4421 accessible true 00:16:22.688 12:40:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:22.688 12:40:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:22.945 12:40:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:22.945 12:40:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:16:22.945 12:40:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:23.202 12:40:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:23.459 12:40:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:16:24.391 12:40:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:16:24.391 12:40:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:24.391 12:40:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:24.391 12:40:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:24.706 12:40:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:24.706 12:40:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:24.706 12:40:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:24.706 12:40:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:24.978 12:40:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:24.978 12:40:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:24.978 12:40:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:24.978 12:40:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:25.237 12:40:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:25.237 12:40:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:25.237 12:40:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:25.237 12:40:51 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:25.495 12:40:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:25.495 12:40:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:25.495 12:40:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:25.495 12:40:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:25.765 12:40:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:25.765 12:40:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:25.765 12:40:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:25.765 12:40:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:26.030 12:40:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:26.030 12:40:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:16:26.030 12:40:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:26.288 12:40:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:16:26.545 12:40:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:16:27.475 12:40:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:16:27.475 12:40:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:27.476 12:40:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:27.476 12:40:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:27.732 12:40:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:27.732 12:40:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:27.732 12:40:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:27.732 12:40:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:28.297 12:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:16:28.297 12:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:28.297 12:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:28.297 12:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:28.554 12:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:28.554 12:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:28.554 12:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:28.554 12:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:28.554 12:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:28.554 12:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:28.810 12:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:28.810 12:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:28.811 12:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:28.811 12:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:28.811 12:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:28.811 12:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:29.068 12:40:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:29.068 12:40:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:16:29.068 12:40:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:29.325 12:40:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:29.583 12:40:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:16:30.955 12:40:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:16:30.955 12:40:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:30.955 12:40:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:30.955 12:40:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:30.955 12:40:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:30.955 12:40:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:30.955 12:40:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:30.955 12:40:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:31.212 12:40:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:31.212 12:40:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:31.212 12:40:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:31.212 12:40:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:31.469 12:40:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:31.469 12:40:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:31.469 12:40:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:31.469 12:40:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:31.727 12:40:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:31.727 12:40:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:31.727 12:40:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:31.727 12:40:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:32.056 12:40:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:32.056 12:40:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:32.056 12:40:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:32.056 12:40:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:32.314 12:40:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:32.314 12:40:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible 
inaccessible 00:16:32.314 12:40:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:32.572 12:40:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:32.830 12:40:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:16:33.762 12:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:16:33.762 12:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:33.762 12:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:33.762 12:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:34.020 12:41:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:34.020 12:41:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:34.020 12:41:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:34.020 12:41:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:34.278 12:41:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:34.278 12:41:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:34.278 12:41:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:34.278 12:41:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:34.537 12:41:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:34.537 12:41:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:34.537 12:41:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:34.537 12:41:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:34.795 12:41:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:34.795 12:41:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:34.795 12:41:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:34.795 12:41:00 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:35.361 12:41:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:35.361 12:41:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:35.361 12:41:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:35.361 12:41:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:35.361 12:41:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:35.361 12:41:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:16:35.361 12:41:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:16:35.619 12:41:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:35.942 12:41:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:16:36.908 12:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:16:36.908 12:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:36.908 12:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:36.908 12:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:37.165 12:41:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:37.165 12:41:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:37.165 12:41:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:37.165 12:41:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:37.422 12:41:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:37.422 12:41:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:37.423 12:41:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:37.423 12:41:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:37.681 12:41:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:37.681 12:41:03 
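From multipath_status.sh@90 onward every numbered case repeats the same cycle: set_ANA_state changes the ANA state advertised by the two listeners of nqn.2016-06.io.spdk:cnode1 on the target, the script sleeps for a second, and check_status/port_status then query bdevperf for the current/connected/accessible flags of each path. A hedged sketch of one such cycle (rpc.py is invoked with its full repo path in the trace; the non_optimized/inaccessible pair is just one of the combinations exercised):

  # target side: flip the ANA state of the 4420 and 4421 listeners
  scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
  scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
  sleep 1
  # host side: ask bdevperf (RPC socket /var/tmp/bdevperf.sock) how it sees the 4420 path
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
      | jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4420").current'

The same jq filter is reused with .connected and .accessible, and with trsvcid 4421, to build the six booleans that check_status compares against its expected arguments.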
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:37.681 12:41:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:37.681 12:41:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:37.939 12:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:37.939 12:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:37.939 12:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:37.939 12:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:38.504 12:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:38.504 12:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:38.504 12:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:38.504 12:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:38.504 12:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:38.504 12:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:16:38.762 12:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:16:38.762 12:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:16:39.018 12:41:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:39.275 12:41:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:16:40.206 12:41:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:16:40.206 12:41:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:40.206 12:41:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:40.206 12:41:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:40.770 12:41:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:40.770 12:41:06 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:40.770 12:41:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:40.770 12:41:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:41.029 12:41:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:41.029 12:41:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:41.029 12:41:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:41.029 12:41:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:41.287 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:41.287 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:41.287 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:41.287 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:41.544 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:41.544 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:41.544 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:41.544 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:41.803 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:41.803 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:41.803 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:41.803 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:42.061 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:42.061 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:16:42.061 12:41:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:42.320 12:41:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4421 -n optimized 00:16:42.584 12:41:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:16:43.516 12:41:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:16:43.516 12:41:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:43.516 12:41:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:43.516 12:41:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:43.774 12:41:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:43.774 12:41:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:43.774 12:41:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:43.774 12:41:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:44.113 12:41:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:44.113 12:41:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:44.113 12:41:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:44.113 12:41:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:44.372 12:41:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:44.372 12:41:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:44.372 12:41:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:44.372 12:41:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:44.630 12:41:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:44.630 12:41:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:44.630 12:41:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:44.630 12:41:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:44.887 12:41:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:44.887 12:41:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:44.887 12:41:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:44.887 12:41:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:45.145 12:41:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:45.145 12:41:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:16:45.145 12:41:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:45.402 12:41:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:16:45.659 12:41:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:16:46.592 12:41:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:16:46.592 12:41:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:46.592 12:41:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:46.592 12:41:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:46.849 12:41:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:46.849 12:41:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:46.849 12:41:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:46.849 12:41:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:47.107 12:41:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:47.107 12:41:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:47.107 12:41:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:47.107 12:41:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:47.364 12:41:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:47.364 12:41:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:47.364 12:41:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:47.364 12:41:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 
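The all-true check_status cases in this stretch only become possible after multipath_status.sh@116 switched the Nvme0n1 bdev from the default active-passive behaviour to the active_active multipath policy, so that when both listeners advertise the same usable ANA state both the 4420 and 4421 paths can report current == true at once. On the host side that is a single RPC against the bdevperf socket:

  scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active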
00:16:47.620 12:41:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:47.620 12:41:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:47.620 12:41:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:47.620 12:41:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:47.877 12:41:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:47.877 12:41:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:47.877 12:41:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:47.877 12:41:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:48.157 12:41:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:48.157 12:41:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:16:48.157 12:41:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:16:48.414 12:41:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:16:48.672 12:41:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:16:49.605 12:41:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:16:49.605 12:41:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:49.605 12:41:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:49.605 12:41:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:49.888 12:41:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:49.888 12:41:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:49.888 12:41:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:49.888 12:41:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:50.146 12:41:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:50.146 12:41:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:50.146 
12:41:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:50.146 12:41:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:50.404 12:41:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:50.404 12:41:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:50.404 12:41:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:50.404 12:41:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:50.662 12:41:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:50.662 12:41:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:50.662 12:41:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:50.662 12:41:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:50.920 12:41:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:50.920 12:41:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:50.920 12:41:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:50.920 12:41:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:51.179 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:51.179 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 77157 00:16:51.179 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 77157 ']' 00:16:51.179 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 77157 00:16:51.179 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:16:51.179 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:51.179 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77157 00:16:51.179 killing process with pid 77157 00:16:51.179 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:16:51.179 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:16:51.179 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77157' 00:16:51.179 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 77157 00:16:51.179 12:41:17 
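killprocess 77157 is the standard autotest_common.sh teardown helper: it confirms the pid is still alive, reads the process name (reactor_2, bdevperf's reactor thread, since bdevperf was pinned to core 2 with -m 0x4), prints the "killing process with pid" marker, then kills the process and waits for it. The "Connection closed with partial response" lines that follow come from stopping bdevperf while the 90-second verify job launched by perform_tests was still running, and try.txt is the captured bdevperf output dumped next; the ASYMMETRIC ACCESS INACCESSIBLE completions in it line up with the windows in which a listener had been set to the inaccessible ANA state while I/O was in flight. A condensed sketch of the helper, following the trace:

  pid=77157
  kill -0 "$pid"                                    # confirm the process still exists
  process_name=$(ps --no-headers -o comm= "$pid")   # reports reactor_2 in this run
  # (the helper also special-cases process_name == sudo; that branch is not taken here)
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"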
nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 77157 00:16:51.179 Connection closed with partial response: 00:16:51.179 00:16:51.179 00:16:51.443 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 77157 00:16:51.443 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:51.443 [2024-07-12 12:40:42.006067] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:16:51.443 [2024-07-12 12:40:42.006189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77157 ] 00:16:51.443 [2024-07-12 12:40:42.145776] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.443 [2024-07-12 12:40:42.276460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:51.443 [2024-07-12 12:40:42.331902] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:51.443 Running I/O for 90 seconds... 00:16:51.443 [2024-07-12 12:40:58.480229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.443 [2024-07-12 12:40:58.480342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:16:51.443 [2024-07-12 12:40:58.480435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:63616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.443 [2024-07-12 12:40:58.480464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:16:51.443 [2024-07-12 12:40:58.480508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.444 [2024-07-12 12:40:58.480532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:16:51.444 [2024-07-12 12:40:58.480556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.444 [2024-07-12 12:40:58.480573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:16:51.444 [2024-07-12 12:40:58.480596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:63640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.444 [2024-07-12 12:40:58.480613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:16:51.444 [2024-07-12 12:40:58.480636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.444 [2024-07-12 12:40:58.480653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:16:51.444 [2024-07-12 12:40:58.480676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:62968 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:51.444 [2024-07-12 12:40:58.480693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:16:51.444 [2024-07-12 12:40:58.480716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:62976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.444 [2024-07-12 12:40:58.480733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:16:51.444 [2024-07-12 12:40:58.480756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:62984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.444 [2024-07-12 12:40:58.480772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:16:51.444 [2024-07-12 12:40:58.480795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:62992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.444 [2024-07-12 12:40:58.480811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:16:51.444 [2024-07-12 12:40:58.480834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:63000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.444 [2024-07-12 12:40:58.480874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:51.444 [2024-07-12 12:40:58.480899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:63008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.444 [2024-07-12 12:40:58.480916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:16:51.444 [2024-07-12 12:40:58.480944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:63016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.444 [2024-07-12 12:40:58.480971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:16:51.444 [2024-07-12 12:40:58.480995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:63024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.444 [2024-07-12 12:40:58.481012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:16:51.444 [2024-07-12 12:40:58.481035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:63032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.444 [2024-07-12 12:40:58.481052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:16:51.444 [2024-07-12 12:40:58.481074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:63040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.444 [2024-07-12 12:40:58.481095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:16:51.444 [2024-07-12 12:40:58.481129] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:69 nsid:1 lba:63048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.444 [2024-07-12 12:40:58.481148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:16:51.444 [2024-07-12 12:40:58.481177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:63056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.444 [2024-07-12 12:40:58.481206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:16:51.444 [2024-07-12 12:40:58.481233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:63064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.444 [2024-07-12 12:40:58.481250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:16:51.444 [2024-07-12 12:40:58.481273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:63072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.444 [2024-07-12 12:40:58.481290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:16:51.444 [2024-07-12 12:40:58.481312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:63080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.444 [2024-07-12 12:40:58.481329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:16:51.444 [2024-07-12 12:40:58.481352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:63088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.444 [2024-07-12 12:40:58.481368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:16:51.444 [2024-07-12 12:40:58.481390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.444 [2024-07-12 12:40:58.481433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:16:51.444 [2024-07-12 12:40:58.481459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:63664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.444 [2024-07-12 12:40:58.481477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:16:51.444 [2024-07-12 12:40:58.481517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.444 [2024-07-12 12:40:58.481539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:16:51.444 [2024-07-12 12:40:58.481563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.444 [2024-07-12 12:40:58.481581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:16:51.444 [2024-07-12 
12:40:58.481604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.444 [2024-07-12 12:40:58.481621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:16:51.444 [2024-07-12 12:40:58.481643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.444 [2024-07-12 12:40:58.481660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:16:51.444 [2024-07-12 12:40:58.481683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.444 [2024-07-12 12:40:58.481700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:16:51.444 [2024-07-12 12:40:58.481722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:63712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.444 [2024-07-12 12:40:58.481739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:16:51.444 [2024-07-12 12:40:58.481762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:63720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.444 [2024-07-12 12:40:58.481788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:16:51.444 [2024-07-12 12:40:58.481825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:63728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.444 [2024-07-12 12:40:58.481845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:16:51.444 [2024-07-12 12:40:58.481869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:63736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.444 [2024-07-12 12:40:58.481886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:16:51.444 [2024-07-12 12:40:58.481910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:63744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.444 [2024-07-12 12:40:58.481927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:16:51.444 [2024-07-12 12:40:58.481949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:63752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.444 [2024-07-12 12:40:58.481966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:16:51.444 [2024-07-12 12:40:58.481998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:63096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.444 [2024-07-12 12:40:58.482017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 
sqhd:0019 p:0 m:0 dnr:0 00:16:51.445 [2024-07-12 12:40:58.482040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:63104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.445 [2024-07-12 12:40:58.482057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:16:51.445 [2024-07-12 12:40:58.482079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:63112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.445 [2024-07-12 12:40:58.482095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:16:51.445 [2024-07-12 12:40:58.482118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:63120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.445 [2024-07-12 12:40:58.482135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:16:51.445 [2024-07-12 12:40:58.482158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:63128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.445 [2024-07-12 12:40:58.482174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:16:51.445 [2024-07-12 12:40:58.482196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:63136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.445 [2024-07-12 12:40:58.482213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:16:51.445 [2024-07-12 12:40:58.482236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:63144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.445 [2024-07-12 12:40:58.482252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:16:51.445 [2024-07-12 12:40:58.482275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:63152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.445 [2024-07-12 12:40:58.482291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:16:51.445 [2024-07-12 12:40:58.482313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:63160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.445 [2024-07-12 12:40:58.482330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:51.445 [2024-07-12 12:40:58.482353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:63168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.445 [2024-07-12 12:40:58.482369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:51.445 [2024-07-12 12:40:58.482392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:63176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.445 [2024-07-12 12:40:58.482422] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:51.445 [2024-07-12 12:40:58.482447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:63184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.445 [2024-07-12 12:40:58.482464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:51.445 [2024-07-12 12:40:58.482498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:63192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.445 [2024-07-12 12:40:58.482515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:51.445 [2024-07-12 12:40:58.482538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:63200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.445 [2024-07-12 12:40:58.482556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:51.445 [2024-07-12 12:40:58.482580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:63208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.445 [2024-07-12 12:40:58.482597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:51.445 [2024-07-12 12:40:58.482620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:63216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.445 [2024-07-12 12:40:58.482637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:51.445 [2024-07-12 12:40:58.482661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:63760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.445 [2024-07-12 12:40:58.482677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:16:51.445 [2024-07-12 12:40:58.482701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.445 [2024-07-12 12:40:58.482717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:16:51.445 [2024-07-12 12:40:58.482740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:63776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.445 [2024-07-12 12:40:58.482757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:16:51.445 [2024-07-12 12:40:58.482780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:63784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.445 [2024-07-12 12:40:58.482797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:51.445 [2024-07-12 12:40:58.482820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:63792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.445 [2024-07-12 
12:40:58.482836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:51.445 [2024-07-12 12:40:58.482864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.445 [2024-07-12 12:40:58.482882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:51.445 [2024-07-12 12:40:58.482906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:63808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.445 [2024-07-12 12:40:58.482922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:51.445 [2024-07-12 12:40:58.482945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:63816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.445 [2024-07-12 12:40:58.482962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:16:51.445 [2024-07-12 12:40:58.482993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:63824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.445 [2024-07-12 12:40:58.483010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:16:51.445 [2024-07-12 12:40:58.483034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:63832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.445 [2024-07-12 12:40:58.483051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:16:51.445 [2024-07-12 12:40:58.483073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:63840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.445 [2024-07-12 12:40:58.483090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:16:51.445 [2024-07-12 12:40:58.483113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:63848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.445 [2024-07-12 12:40:58.483130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:16:51.445 [2024-07-12 12:40:58.483152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:63856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.445 [2024-07-12 12:40:58.483169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:16:51.445 [2024-07-12 12:40:58.483192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:63224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.445 [2024-07-12 12:40:58.483209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:16:51.445 [2024-07-12 12:40:58.483233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:63232 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:16:51.445 [2024-07-12 12:40:58.483249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:51.445 [2024-07-12 12:40:58.483273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:63240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.445 [2024-07-12 12:40:58.483290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:51.445 [2024-07-12 12:40:58.483314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:63248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.445 [2024-07-12 12:40:58.483331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:51.445 [2024-07-12 12:40:58.483354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:63256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.445 [2024-07-12 12:40:58.483384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:51.445 [2024-07-12 12:40:58.483419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:63264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.445 [2024-07-12 12:40:58.483438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:51.445 [2024-07-12 12:40:58.483462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:63272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.445 [2024-07-12 12:40:58.483479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:51.445 [2024-07-12 12:40:58.483502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:63280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.445 [2024-07-12 12:40:58.483530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:51.446 [2024-07-12 12:40:58.483555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:63288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.446 [2024-07-12 12:40:58.483573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:51.446 [2024-07-12 12:40:58.483595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:63296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.446 [2024-07-12 12:40:58.483612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:51.446 [2024-07-12 12:40:58.483635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:63304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.446 [2024-07-12 12:40:58.483652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:51.446 [2024-07-12 12:40:58.483676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:68 nsid:1 lba:63312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.446 [2024-07-12 12:40:58.483693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.446 [2024-07-12 12:40:58.483716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:63320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.446 [2024-07-12 12:40:58.483733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:51.446 [2024-07-12 12:40:58.483756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:63328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.446 [2024-07-12 12:40:58.483773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:51.446 [2024-07-12 12:40:58.483796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:63336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.446 [2024-07-12 12:40:58.483813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:51.446 [2024-07-12 12:40:58.483835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:63344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.446 [2024-07-12 12:40:58.483852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:51.446 [2024-07-12 12:40:58.483875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:63352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.446 [2024-07-12 12:40:58.483902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:51.446 [2024-07-12 12:40:58.483925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:63360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.446 [2024-07-12 12:40:58.483942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:51.446 [2024-07-12 12:40:58.483965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:63368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.446 [2024-07-12 12:40:58.483982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:51.446 [2024-07-12 12:40:58.484005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:63376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.446 [2024-07-12 12:40:58.484029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:51.446 [2024-07-12 12:40:58.484054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:63384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.446 [2024-07-12 12:40:58.484071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:51.446 [2024-07-12 12:40:58.484094] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:63392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.446 [2024-07-12 12:40:58.484111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:51.446 [2024-07-12 12:40:58.484134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:63400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.446 [2024-07-12 12:40:58.484151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:51.446 [2024-07-12 12:40:58.484174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:63408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.446 [2024-07-12 12:40:58.484191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:51.446 [2024-07-12 12:40:58.484218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:63864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.446 [2024-07-12 12:40:58.484236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:51.446 [2024-07-12 12:40:58.484259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.446 [2024-07-12 12:40:58.484276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:51.446 [2024-07-12 12:40:58.484299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.446 [2024-07-12 12:40:58.484316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:16:51.446 [2024-07-12 12:40:58.484351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:63888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.446 [2024-07-12 12:40:58.484369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:51.446 [2024-07-12 12:40:58.484392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:63896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.446 [2024-07-12 12:40:58.484424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:51.446 [2024-07-12 12:40:58.484449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:63904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.446 [2024-07-12 12:40:58.484466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:51.446 [2024-07-12 12:40:58.484490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:63912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.446 [2024-07-12 12:40:58.484506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0054 p:0 m:0 
dnr:0 00:16:51.446 [2024-07-12 12:40:58.484529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:63920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.446 [2024-07-12 12:40:58.484546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:51.446 [2024-07-12 12:40:58.484619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:63416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.446 [2024-07-12 12:40:58.484645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:51.446 [2024-07-12 12:40:58.484669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:63424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.446 [2024-07-12 12:40:58.484686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:51.446 [2024-07-12 12:40:58.484709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:63432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.446 [2024-07-12 12:40:58.484726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:51.446 [2024-07-12 12:40:58.484750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:63440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.446 [2024-07-12 12:40:58.484767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:51.446 [2024-07-12 12:40:58.484790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:63448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.446 [2024-07-12 12:40:58.484807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:51.446 [2024-07-12 12:40:58.484830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.446 [2024-07-12 12:40:58.484847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:51.446 [2024-07-12 12:40:58.484870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:63464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.446 [2024-07-12 12:40:58.484886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:51.446 [2024-07-12 12:40:58.484909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:63472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.446 [2024-07-12 12:40:58.484926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:51.446 [2024-07-12 12:40:58.484949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:63928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.446 [2024-07-12 12:40:58.484966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:51.446 [2024-07-12 12:40:58.484988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:63936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.446 [2024-07-12 12:40:58.485005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:51.446 [2024-07-12 12:40:58.485028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.446 [2024-07-12 12:40:58.485045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:51.447 [2024-07-12 12:40:58.485074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:63952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.447 [2024-07-12 12:40:58.485092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.447 [2024-07-12 12:40:58.485123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:63960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.447 [2024-07-12 12:40:58.485141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:51.447 [2024-07-12 12:40:58.485164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:63968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.447 [2024-07-12 12:40:58.485181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:51.447 [2024-07-12 12:40:58.485204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:63976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.447 [2024-07-12 12:40:58.485221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:51.447 [2024-07-12 12:40:58.485244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:63984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.447 [2024-07-12 12:40:58.485261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:51.447 [2024-07-12 12:40:58.485284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:63480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.447 [2024-07-12 12:40:58.485307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:51.447 [2024-07-12 12:40:58.485331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:63488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.447 [2024-07-12 12:40:58.485348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:51.447 [2024-07-12 12:40:58.485371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:63496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.447 [2024-07-12 12:40:58.485388] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:51.447 [2024-07-12 12:40:58.485424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:63504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.447 [2024-07-12 12:40:58.485443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:51.447 [2024-07-12 12:40:58.485466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:63512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.447 [2024-07-12 12:40:58.485483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:51.447 [2024-07-12 12:40:58.485506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:63520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.447 [2024-07-12 12:40:58.485523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:51.447 [2024-07-12 12:40:58.485546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:63528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.447 [2024-07-12 12:40:58.485563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:51.447 [2024-07-12 12:40:58.485586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:63536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.447 [2024-07-12 12:40:58.485603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:51.447 [2024-07-12 12:40:58.485625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:63544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.447 [2024-07-12 12:40:58.485652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:51.447 [2024-07-12 12:40:58.485676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:63552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.447 [2024-07-12 12:40:58.485693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:51.447 [2024-07-12 12:40:58.485715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:63560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.447 [2024-07-12 12:40:58.485732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:51.447 [2024-07-12 12:40:58.485761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:63568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.447 [2024-07-12 12:40:58.485779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:51.447 [2024-07-12 12:40:58.485802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:63576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:16:51.447 [2024-07-12 12:40:58.485819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:51.447 [2024-07-12 12:40:58.485841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:63584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.447 [2024-07-12 12:40:58.485858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:16:51.447 [2024-07-12 12:40:58.485881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:63592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.447 [2024-07-12 12:40:58.485898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:16:51.447 [2024-07-12 12:40:58.486346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:63600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.447 [2024-07-12 12:40:58.486373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:16:51.447 [2024-07-12 12:41:14.549323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:71480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.447 [2024-07-12 12:41:14.549430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:16:51.447 [2024-07-12 12:41:14.549488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:71496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.447 [2024-07-12 12:41:14.549508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:16:51.447 [2024-07-12 12:41:14.549532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:71512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.447 [2024-07-12 12:41:14.549549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:16:51.447 [2024-07-12 12:41:14.549571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:71528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.447 [2024-07-12 12:41:14.549587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:16:51.447 [2024-07-12 12:41:14.549609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.447 [2024-07-12 12:41:14.549654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:16:51.447 [2024-07-12 12:41:14.549677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:71560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.447 [2024-07-12 12:41:14.549694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:16:51.447 [2024-07-12 12:41:14.549715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 
lba:70992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.447 [2024-07-12 12:41:14.549731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:16:51.447 [2024-07-12 12:41:14.549753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:71024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.447 [2024-07-12 12:41:14.549768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:16:51.447 [2024-07-12 12:41:14.549789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:71056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.447 [2024-07-12 12:41:14.549805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:16:51.447 [2024-07-12 12:41:14.549826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.447 [2024-07-12 12:41:14.549841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:16:51.447 [2024-07-12 12:41:14.549862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:71112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.447 [2024-07-12 12:41:14.549877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:16:51.447 [2024-07-12 12:41:14.549898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:71072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.447 [2024-07-12 12:41:14.549913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:16:51.447 [2024-07-12 12:41:14.549935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:71104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.447 [2024-07-12 12:41:14.549950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:16:51.447 [2024-07-12 12:41:14.549971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:71600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.447 [2024-07-12 12:41:14.549986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:16:51.447 [2024-07-12 12:41:14.550008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.447 [2024-07-12 12:41:14.550023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:16:51.447 [2024-07-12 12:41:14.550044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:71632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.447 [2024-07-12 12:41:14.550060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:16:51.447 [2024-07-12 12:41:14.550081] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.447 [2024-07-12 12:41:14.550096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:16:51.447 [2024-07-12 12:41:14.550147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:71136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.447 [2024-07-12 12:41:14.550165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:16:51.447 [2024-07-12 12:41:14.550188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:71160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.447 [2024-07-12 12:41:14.550204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:16:51.447 [2024-07-12 12:41:14.550227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:71192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.448 [2024-07-12 12:41:14.550244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:16:51.448 [2024-07-12 12:41:14.550266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:71224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.448 [2024-07-12 12:41:14.550282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:16:51.448 [2024-07-12 12:41:14.550304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:71656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.448 [2024-07-12 12:41:14.550320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:16:51.448 [2024-07-12 12:41:14.550342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:71672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.448 [2024-07-12 12:41:14.550358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:16:51.448 [2024-07-12 12:41:14.550380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:71688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.448 [2024-07-12 12:41:14.550397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:16:51.448 [2024-07-12 12:41:14.550419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:71704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.448 [2024-07-12 12:41:14.550447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:16:51.448 [2024-07-12 12:41:14.550473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:71720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.448 [2024-07-12 12:41:14.550490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 
00:16:51.448 [2024-07-12 12:41:14.550512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:71736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.448 [2024-07-12 12:41:14.550528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:16:51.448 [2024-07-12 12:41:14.550550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:71752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.448 [2024-07-12 12:41:14.550566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:16:51.448 [2024-07-12 12:41:14.550603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:71256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.448 [2024-07-12 12:41:14.550619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:16:51.448 [2024-07-12 12:41:14.550650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:71288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.448 [2024-07-12 12:41:14.550667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:16:51.448 [2024-07-12 12:41:14.550688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:71320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.448 [2024-07-12 12:41:14.550704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:16:51.448 [2024-07-12 12:41:14.550725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:71352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.448 [2024-07-12 12:41:14.550741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:16:51.448 [2024-07-12 12:41:14.550762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:71392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.448 [2024-07-12 12:41:14.550778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:16:51.448 [2024-07-12 12:41:14.550800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:71424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.448 [2024-07-12 12:41:14.550816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:16:51.448 [2024-07-12 12:41:14.550839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:71456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.448 [2024-07-12 12:41:14.550855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:16:51.448 [2024-07-12 12:41:14.552004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:71168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.448 [2024-07-12 12:41:14.552031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:11 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:16:51.448 [2024-07-12 12:41:14.552058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:71200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.448 [2024-07-12 12:41:14.552075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:16:51.448 [2024-07-12 12:41:14.552097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:71232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.448 [2024-07-12 12:41:14.552113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:16:51.448 [2024-07-12 12:41:14.552151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:71768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.448 [2024-07-12 12:41:14.552167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:16:51.448 [2024-07-12 12:41:14.552190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.448 [2024-07-12 12:41:14.552206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:16:51.448 [2024-07-12 12:41:14.552228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:71800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.448 [2024-07-12 12:41:14.552244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:16:51.448 [2024-07-12 12:41:14.552266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:71816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.448 [2024-07-12 12:41:14.552295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:16:51.448 [2024-07-12 12:41:14.552319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:71832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.448 [2024-07-12 12:41:14.552335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:16:51.448 [2024-07-12 12:41:14.552357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:71848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.448 [2024-07-12 12:41:14.552373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:16:51.448 [2024-07-12 12:41:14.552395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:71864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.448 [2024-07-12 12:41:14.552412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:16:51.448 [2024-07-12 12:41:14.553139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:71880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.448 [2024-07-12 12:41:14.553164] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:16:51.448 [2024-07-12 12:41:14.553206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:71896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.448 [2024-07-12 12:41:14.553224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:16:51.448 [2024-07-12 12:41:14.553247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:71912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.448 [2024-07-12 12:41:14.553264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:16:51.448 [2024-07-12 12:41:14.553286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:71928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.448 [2024-07-12 12:41:14.553302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:16:51.448 [2024-07-12 12:41:14.553324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:71280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.448 [2024-07-12 12:41:14.553341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:16:51.448 [2024-07-12 12:41:14.553364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:71312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.448 [2024-07-12 12:41:14.553381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:16:51.448 [2024-07-12 12:41:14.553403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:71344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.448 [2024-07-12 12:41:14.553419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:16:51.448 [2024-07-12 12:41:14.553455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:71368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.448 [2024-07-12 12:41:14.553490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:16:51.449 [2024-07-12 12:41:14.553513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:71400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.449 [2024-07-12 12:41:14.553542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:16:51.449 [2024-07-12 12:41:14.553566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:71432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:51.449 [2024-07-12 12:41:14.553584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:16:51.449 [2024-07-12 12:41:14.553607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:71464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:16:51.449 [2024-07-12 12:41:14.553623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:16:51.449 [2024-07-12 12:41:14.553646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.449 [2024-07-12 12:41:14.553663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:16:51.449 [2024-07-12 12:41:14.553686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:71960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.449 [2024-07-12 12:41:14.553702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:16:51.449 [2024-07-12 12:41:14.553726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.449 [2024-07-12 12:41:14.553742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:16:51.449 [2024-07-12 12:41:14.553765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:51.449 [2024-07-12 12:41:14.553782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:16:51.449 Received shutdown signal, test time was about 33.212246 seconds 00:16:51.449 00:16:51.449 Latency(us) 00:16:51.449 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:51.449 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:51.449 Verification LBA range: start 0x0 length 0x4000 00:16:51.449 Nvme0n1 : 33.21 8383.62 32.75 0.00 0.00 15235.24 942.08 4026531.84 00:16:51.449 =================================================================================================================== 00:16:51.449 Total : 8383.62 32.75 0.00 0.00 15235.24 942.08 4026531.84 00:16:51.449 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:51.707 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:16:51.707 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:51.707 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:16:51.707 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:51.707 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:16:51.707 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:51.707 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:16:51.707 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:51.707 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:51.707 rmmod nvme_tcp 00:16:51.707 rmmod nvme_fabrics 00:16:51.707 rmmod nvme_keyring 00:16:51.707 12:41:17 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:51.707 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:16:51.707 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:16:51.707 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 77095 ']' 00:16:51.707 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 77095 00:16:51.707 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 77095 ']' 00:16:51.708 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 77095 00:16:51.708 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:16:51.708 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:51.708 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77095 00:16:51.708 killing process with pid 77095 00:16:51.708 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:51.708 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:51.708 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77095' 00:16:51.708 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 77095 00:16:51.708 12:41:17 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 77095 00:16:51.966 12:41:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:51.966 12:41:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:51.966 12:41:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:51.966 12:41:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:51.966 12:41:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:51.966 12:41:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.966 12:41:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:51.966 12:41:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:52.227 12:41:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:52.227 ************************************ 00:16:52.227 END TEST nvmf_host_multipath_status 00:16:52.227 ************************************ 00:16:52.227 00:16:52.227 real 0m39.325s 00:16:52.227 user 2m6.482s 00:16:52.227 sys 0m11.892s 00:16:52.227 12:41:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:52.227 12:41:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:52.227 12:41:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:52.227 12:41:18 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:52.227 12:41:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:52.227 12:41:18 nvmf_tcp -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:16:52.227 12:41:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:52.227 ************************************ 00:16:52.227 START TEST nvmf_discovery_remove_ifc 00:16:52.227 ************************************ 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:52.227 * Looking for test storage... 00:16:52.227 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=16360ad5-8c23-4d49-afe0-9a35c426fec5 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc 
-- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:52.227 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:52.228 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:52.228 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:52.228 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:52.228 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:52.228 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:52.228 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:52.228 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:52.228 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:52.228 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:52.228 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:52.228 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:52.228 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:52.228 12:41:18 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:52.228 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:52.228 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:52.228 Cannot find device "nvmf_tgt_br" 00:16:52.228 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:16:52.228 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:52.228 Cannot find device "nvmf_tgt_br2" 00:16:52.228 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:16:52.228 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:52.228 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:52.228 Cannot find device "nvmf_tgt_br" 00:16:52.228 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:16:52.228 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:52.228 Cannot find device "nvmf_tgt_br2" 00:16:52.228 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:16:52.228 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:52.486 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:52.486 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:52.486 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:52.486 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:16:52.486 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:52.486 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:52.486 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:16:52.486 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:52.486 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:52.486 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:52.486 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:52.486 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:52.486 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:52.486 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:52.486 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:52.486 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:52.486 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 
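For reference, a minimal sketch of the veth/bridge topology that the nvmf_veth_init sequence running in the surrounding lines builds (the bridge, iptables and ping steps follow just below). Namespace, interface and address names are taken from the trace; the exact ordering and error handling in SPDK's test/nvmf/common.sh may differ:

    # Sketch only: initiator veth stays in the root namespace, the target veth moves into
    # nvmf_tgt_ns_spdk, and the two bridge-side peers are enslaved to nvmf_br so that
    # 10.0.0.1 can reach the target listener on 10.0.0.2.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP (port 4420) in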
00:16:52.486 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:52.486 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:52.486 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:52.486 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:52.486 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:52.486 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:52.486 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:52.486 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:52.486 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:52.486 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:52.486 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:52.486 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:52.486 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:52.486 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:52.486 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:52.486 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:16:52.486 00:16:52.486 --- 10.0.0.2 ping statistics --- 00:16:52.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.486 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:16:52.486 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:52.486 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:52.486 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:16:52.486 00:16:52.486 --- 10.0.0.3 ping statistics --- 00:16:52.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.486 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:16:52.486 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:52.486 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:52.486 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:16:52.486 00:16:52.486 --- 10.0.0.1 ping statistics --- 00:16:52.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.486 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:16:52.486 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:52.486 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:16:52.486 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:52.486 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:52.486 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:52.486 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:52.486 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:52.486 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:52.486 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:52.743 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:16:52.743 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:52.743 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:52.743 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:52.743 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=77942 00:16:52.743 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:52.743 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 77942 00:16:52.743 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 77942 ']' 00:16:52.743 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.743 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:52.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.743 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.743 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:52.743 12:41:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:52.743 [2024-07-12 12:41:18.624338] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
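The nvmfappstart step above launches the SPDK target inside that namespace and then blocks in waitforlisten until the application answers on its RPC socket. A simplified sketch of that start/wait pattern, with the binary path and core mask copied from the trace (the real waitforlisten helper in autotest_common.sh adds a retry limit and richer diagnostics, so treat this as an illustration only):

    # Sketch: start nvmf_tgt in the test namespace and poll its RPC socket before issuing RPCs.
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5    # /var/tmp/spdk.sock is not up yet; keep retrying
    done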
00:16:52.743 [2024-07-12 12:41:18.624512] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:52.743 [2024-07-12 12:41:18.763760] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.000 [2024-07-12 12:41:18.894071] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:53.000 [2024-07-12 12:41:18.894167] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:53.000 [2024-07-12 12:41:18.894190] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:53.000 [2024-07-12 12:41:18.894201] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:53.000 [2024-07-12 12:41:18.894210] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:53.000 [2024-07-12 12:41:18.894250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.000 [2024-07-12 12:41:18.955285] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:53.564 12:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:53.564 12:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:16:53.564 12:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:53.564 12:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:53.564 12:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:53.821 12:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:53.821 12:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:16:53.821 12:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.821 12:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:53.821 [2024-07-12 12:41:19.678115] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:53.821 [2024-07-12 12:41:19.686207] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:53.821 null0 00:16:53.821 [2024-07-12 12:41:19.718158] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:53.821 12:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.821 12:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77974 00:16:53.821 12:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:16:53.821 12:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77974 /tmp/host.sock 00:16:53.821 12:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 77974 ']' 00:16:53.821 12:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:16:53.821 12:41:19 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:53.821 12:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:53.821 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:53.821 12:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:53.821 12:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:53.821 [2024-07-12 12:41:19.794696] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:16:53.821 [2024-07-12 12:41:19.794944] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77974 ] 00:16:54.079 [2024-07-12 12:41:19.932439] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.079 [2024-07-12 12:41:20.064901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.012 12:41:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:55.012 12:41:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:16:55.012 12:41:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:55.012 12:41:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:16:55.012 12:41:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.012 12:41:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:55.012 12:41:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.012 12:41:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:16:55.012 12:41:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.012 12:41:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:55.012 [2024-07-12 12:41:20.877624] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:16:55.012 12:41:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.012 12:41:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:16:55.012 12:41:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.012 12:41:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:55.970 [2024-07-12 12:41:21.931242] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:55.970 [2024-07-12 12:41:21.931292] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:55.970 [2024-07-12 12:41:21.931311] 
bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:55.970 [2024-07-12 12:41:21.937304] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:55.970 [2024-07-12 12:41:21.994751] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:55.970 [2024-07-12 12:41:21.994833] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:55.970 [2024-07-12 12:41:21.994860] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:55.970 [2024-07-12 12:41:21.994880] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:55.970 [2024-07-12 12:41:21.994906] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:55.970 12:41:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.970 12:41:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:16:55.970 12:41:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:55.970 12:41:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:55.970 [2024-07-12 12:41:21.999918] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x19cbde0 was disconnected and freed. delete nvme_qpair. 00:16:55.970 12:41:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.970 12:41:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:55.970 12:41:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:55.970 12:41:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:55.970 12:41:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:55.970 12:41:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.231 12:41:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:16:56.231 12:41:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:16:56.231 12:41:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:16:56.231 12:41:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:16:56.231 12:41:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:56.231 12:41:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:56.231 12:41:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.231 12:41:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:56.231 12:41:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:56.232 12:41:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:56.232 12:41:22 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:16:56.232 12:41:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.232 12:41:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:56.232 12:41:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:57.165 12:41:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:57.165 12:41:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:57.165 12:41:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:57.165 12:41:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.165 12:41:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:57.165 12:41:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:57.165 12:41:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:57.165 12:41:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.165 12:41:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:57.165 12:41:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:58.536 12:41:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:58.536 12:41:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:58.536 12:41:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.536 12:41:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:58.536 12:41:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:58.536 12:41:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:58.536 12:41:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:58.536 12:41:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.536 12:41:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:58.536 12:41:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:59.470 12:41:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:59.470 12:41:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:59.470 12:41:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.470 12:41:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:59.470 12:41:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:59.470 12:41:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:59.470 12:41:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:59.470 12:41:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.470 12:41:25 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:59.470 12:41:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:00.403 12:41:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:00.403 12:41:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:00.403 12:41:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.403 12:41:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:00.403 12:41:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:00.403 12:41:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:00.403 12:41:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:00.403 12:41:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.403 12:41:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:00.403 12:41:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:01.372 12:41:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:01.372 12:41:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:01.372 12:41:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:01.372 12:41:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.372 12:41:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:01.372 12:41:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:01.372 12:41:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:01.372 12:41:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.372 [2024-07-12 12:41:27.423178] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:17:01.372 [2024-07-12 12:41:27.423265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:01.372 [2024-07-12 12:41:27.423282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.372 [2024-07-12 12:41:27.423296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:01.372 [2024-07-12 12:41:27.423307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.372 [2024-07-12 12:41:27.423318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:01.372 [2024-07-12 12:41:27.423328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.372 [2024-07-12 12:41:27.423339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:17:01.372 [2024-07-12 12:41:27.423348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.372 [2024-07-12 12:41:27.423360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:01.372 [2024-07-12 12:41:27.423370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:01.372 [2024-07-12 12:41:27.423387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1931ac0 is same with the state(5) to be set 00:17:01.372 12:41:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:01.372 12:41:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:01.372 [2024-07-12 12:41:27.433169] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1931ac0 (9): Bad file descriptor 00:17:01.372 [2024-07-12 12:41:27.443195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:02.746 12:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:02.746 12:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:02.746 12:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.746 12:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:02.746 12:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:02.746 12:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:02.746 12:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:02.746 [2024-07-12 12:41:28.502533] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:17:02.746 [2024-07-12 12:41:28.502643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1931ac0 with addr=10.0.0.2, port=4420 00:17:02.746 [2024-07-12 12:41:28.502680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1931ac0 is same with the state(5) to be set 00:17:02.746 [2024-07-12 12:41:28.502747] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1931ac0 (9): Bad file descriptor 00:17:02.746 [2024-07-12 12:41:28.503620] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:02.746 [2024-07-12 12:41:28.503681] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:02.746 [2024-07-12 12:41:28.503702] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:02.746 [2024-07-12 12:41:28.503725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:02.746 [2024-07-12 12:41:28.503765] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
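The blocks of sort/xargs/"sleep 1" lines repeated above are the test's bdev-polling helpers expanding under xtrace: the host side lists its bdevs over the /tmp/host.sock RPC socket once per second until the expected name (here nvme0n1, later '' and nvme1n1) appears or disappears. A condensed sketch of that pattern, with the socket path and jq filter taken from the trace (the real get_bdev_list/wait_for_bdev helpers in discovery_remove_ifc.sh also handle RPC errors and eventually time out):

    # Sketch of the polling pattern visible in the xtrace output above.
    get_bdev_list() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1    # retry until the host's bdev list matches the expected value
        done
    }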
00:17:02.746 [2024-07-12 12:41:28.503787] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:02.746 12:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.746 12:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:02.746 12:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:03.678 [2024-07-12 12:41:29.503850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:03.678 [2024-07-12 12:41:29.503921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:03.678 [2024-07-12 12:41:29.503934] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:03.678 [2024-07-12 12:41:29.503946] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:17:03.678 [2024-07-12 12:41:29.503971] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:03.678 [2024-07-12 12:41:29.504003] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:17:03.678 [2024-07-12 12:41:29.504065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:03.678 [2024-07-12 12:41:29.504083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.678 [2024-07-12 12:41:29.504097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:03.678 [2024-07-12 12:41:29.504107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.678 [2024-07-12 12:41:29.504123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:03.678 [2024-07-12 12:41:29.504132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.678 [2024-07-12 12:41:29.504143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:03.678 [2024-07-12 12:41:29.504152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.678 [2024-07-12 12:41:29.504163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:03.678 [2024-07-12 12:41:29.504172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.678 [2024-07-12 12:41:29.504182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
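The reset/reconnect failures above ("Bad file descriptor", "controller reinitialization failed") are the expected result of the step taken a few lines earlier at discovery_remove_ifc.sh@75-76: the test removes the target's address and downs its interface inside the namespace, so the established NVMe/TCP connection and every reconnect attempt to 10.0.0.2:4420 fail until the discovery service drops the stale subsystem entry. The two commands, as shown in the trace:

    # Pull the listener's address and take the target-side veth down (from the trace, @75-@76).
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down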
00:17:03.678 [2024-07-12 12:41:29.504653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1935860 (9): Bad file descriptor 00:17:03.678 [2024-07-12 12:41:29.505659] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:17:03.678 [2024-07-12 12:41:29.505681] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:17:03.678 12:41:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:03.678 12:41:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:03.678 12:41:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:03.678 12:41:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.678 12:41:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:03.678 12:41:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:03.678 12:41:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:03.678 12:41:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.678 12:41:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:17:03.678 12:41:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:03.678 12:41:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:03.678 12:41:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:17:03.679 12:41:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:03.679 12:41:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:03.679 12:41:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:03.679 12:41:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:03.679 12:41:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.679 12:41:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:03.679 12:41:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:03.679 12:41:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.679 12:41:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:03.679 12:41:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:04.611 12:41:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:04.611 12:41:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:04.611 12:41:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:04.611 12:41:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.611 12:41:30 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:04.611 12:41:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:04.612 12:41:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:04.612 12:41:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.869 12:41:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:04.869 12:41:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:05.803 [2024-07-12 12:41:31.515882] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:05.803 [2024-07-12 12:41:31.515944] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:05.803 [2024-07-12 12:41:31.515964] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:05.803 [2024-07-12 12:41:31.521921] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:17:05.803 [2024-07-12 12:41:31.578514] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:05.803 [2024-07-12 12:41:31.578590] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:05.803 [2024-07-12 12:41:31.578616] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:05.803 [2024-07-12 12:41:31.578633] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:17:05.803 [2024-07-12 12:41:31.578643] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:05.803 [2024-07-12 12:41:31.584618] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x19d8d90 was disconnected and freed. delete nvme_qpair. 
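The trace above repeats one probe per second: get_bdev_list dumps bdev names over the host RPC socket and the caller sleeps until the list matches the expected device (wait_for_bdev nvme1n1). A minimal sketch of that polling pattern, assuming rpc_cmd wraps scripts/rpc.py against the /tmp/host.sock socket seen in the log; the helper bodies here are illustrative, not the test's actual code:

  # List bdev names over the host application's RPC socket, normalized to a
  # single sorted line (mirrors: bdev_get_bdevs | jq -r '.[].name' | sort | xargs).
  get_bdev_list() {
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  # Poll once per second until the bdev list equals the expected value,
  # e.g. wait_for_bdev nvme1n1 after the target interface is plumbed back up.
  wait_for_bdev() {
      while [[ "$(get_bdev_list)" != "$1" ]]; do
          sleep 1
      done
  }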
00:17:05.803 12:41:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:05.803 12:41:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:05.803 12:41:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:05.803 12:41:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:05.803 12:41:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.803 12:41:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:05.803 12:41:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:05.803 12:41:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.803 12:41:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:17:05.803 12:41:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:17:05.803 12:41:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77974 00:17:05.803 12:41:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 77974 ']' 00:17:05.803 12:41:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 77974 00:17:05.803 12:41:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:17:05.803 12:41:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:05.803 12:41:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77974 00:17:05.803 12:41:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:05.803 12:41:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:05.803 killing process with pid 77974 00:17:05.803 12:41:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77974' 00:17:05.803 12:41:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 77974 00:17:05.803 12:41:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 77974 00:17:06.152 12:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:17:06.152 12:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:06.152 12:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:17:06.152 12:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:06.152 12:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:17:06.152 12:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:06.152 12:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:06.152 rmmod nvme_tcp 00:17:06.152 rmmod nvme_fabrics 00:17:06.152 rmmod nvme_keyring 00:17:06.152 12:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:06.152 12:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:17:06.152 12:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:17:06.152 12:41:32 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 77942 ']' 00:17:06.152 12:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 77942 00:17:06.152 12:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 77942 ']' 00:17:06.152 12:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 77942 00:17:06.152 12:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:17:06.152 12:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:06.152 12:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77942 00:17:06.152 12:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:06.152 12:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:06.152 killing process with pid 77942 00:17:06.152 12:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77942' 00:17:06.152 12:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 77942 00:17:06.152 12:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 77942 00:17:06.410 12:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:06.410 12:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:06.410 12:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:06.410 12:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:06.410 12:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:06.410 12:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.410 12:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:06.410 12:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.410 12:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:06.410 ************************************ 00:17:06.410 END TEST nvmf_discovery_remove_ifc 00:17:06.410 ************************************ 00:17:06.410 00:17:06.410 real 0m14.351s 00:17:06.410 user 0m24.840s 00:17:06.410 sys 0m2.514s 00:17:06.410 12:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:06.410 12:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:17:06.668 12:41:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:06.668 12:41:32 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:06.668 12:41:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:06.668 12:41:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:06.668 12:41:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:06.668 ************************************ 00:17:06.668 START TEST nvmf_identify_kernel_target 00:17:06.668 ************************************ 00:17:06.668 12:41:32 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:17:06.668 * Looking for test storage... 00:17:06.668 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:06.668 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:06.668 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:17:06.668 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:06.668 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:06.668 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:06.668 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:06.668 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:06.668 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:06.668 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:06.668 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:06.668 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:06.668 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:06.668 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:17:06.668 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=16360ad5-8c23-4d49-afe0-9a35c426fec5 00:17:06.668 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:06.668 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:06.668 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:06.668 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:06.668 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:06.668 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:06.668 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:06.668 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:06.668 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:06.669 Cannot find device "nvmf_tgt_br" 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:06.669 Cannot find device "nvmf_tgt_br2" 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:06.669 Cannot find device "nvmf_tgt_br" 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:06.669 Cannot find device "nvmf_tgt_br2" 00:17:06.669 12:41:32 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:06.669 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:06.927 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:06.927 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:06.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:06.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:17:06.927 00:17:06.927 --- 10.0.0.2 ping statistics --- 00:17:06.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.927 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:06.927 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:06.927 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:17:06.927 00:17:06.927 --- 10.0.0.3 ping statistics --- 00:17:06.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.927 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:06.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:06.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:17:06.927 00:17:06.927 --- 10.0.0.1 ping statistics --- 00:17:06.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.927 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:06.927 12:41:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:07.493 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:07.493 Waiting for block devices as requested 00:17:07.493 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:07.493 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:07.493 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:07.493 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:07.493 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:17:07.493 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:17:07.493 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:07.493 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:07.493 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:17:07.493 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:17:07.493 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:07.750 No valid GPT data, bailing 00:17:07.750 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:07.750 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:17:07.750 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:07.750 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:17:07.750 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:07.750 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:07.750 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:17:07.750 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:17:07.750 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:07.750 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:07.750 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:17:07.750 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:17:07.750 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:07.750 No valid GPT data, bailing 00:17:07.750 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:07.751 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 00:17:07.751 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:07.751 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:17:07.751 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:07.751 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:07.751 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:17:07.751 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:17:07.751 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:07.751 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:07.751 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:17:07.751 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:17:07.751 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:07.751 No valid GPT data, bailing 00:17:07.751 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:07.751 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:17:07.751 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:07.751 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:17:07.751 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:07.751 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:07.751 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:17:07.751 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:17:07.751 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:07.751 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:07.751 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:17:07.751 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:17:07.751 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:08.007 No valid GPT data, bailing 00:17:08.008 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:08.008 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:17:08.008 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:17:08.008 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:17:08.008 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:17:08.008 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
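Before the kernel target is assembled, the trace walks every /sys/block/nvme* entry, skips zoned namespaces, and treats a device whose blkid PTTYPE probe comes back empty ("No valid GPT data, bailing") as free, keeping the last candidate as the backing device. A condensed sketch of that selection, under the assumption that the partition-table probe is the only in-use check that matters here; the real autotest helpers carry additional checks:

  # Pick an unused, non-zoned NVMe namespace to back the kernel nvmet target.
  nvme=""
  for block in /sys/block/nvme*; do
      dev=${block##*/}
      # Skip zoned namespaces: queue/zoned reports anything other than "none".
      [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]] && continue
      # An empty PTTYPE means no partition table, i.e. the device is treated as free.
      pt=$(blkid -s PTTYPE -o value "/dev/$dev" || true)
      [[ -z $pt ]] && nvme=/dev/$dev
  done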
00:17:08.008 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:08.008 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:08.008 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:17:08.008 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:17:08.008 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:17:08.008 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:17:08.008 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:17:08.008 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:17:08.008 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:17:08.008 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:17:08.008 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:08.008 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid=16360ad5-8c23-4d49-afe0-9a35c426fec5 -a 10.0.0.1 -t tcp -s 4420 00:17:08.008 00:17:08.008 Discovery Log Number of Records 2, Generation counter 2 00:17:08.008 =====Discovery Log Entry 0====== 00:17:08.008 trtype: tcp 00:17:08.008 adrfam: ipv4 00:17:08.008 subtype: current discovery subsystem 00:17:08.008 treq: not specified, sq flow control disable supported 00:17:08.008 portid: 1 00:17:08.008 trsvcid: 4420 00:17:08.008 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:08.008 traddr: 10.0.0.1 00:17:08.008 eflags: none 00:17:08.008 sectype: none 00:17:08.008 =====Discovery Log Entry 1====== 00:17:08.008 trtype: tcp 00:17:08.008 adrfam: ipv4 00:17:08.008 subtype: nvme subsystem 00:17:08.008 treq: not specified, sq flow control disable supported 00:17:08.008 portid: 1 00:17:08.008 trsvcid: 4420 00:17:08.008 subnqn: nqn.2016-06.io.spdk:testnqn 00:17:08.008 traddr: 10.0.0.1 00:17:08.008 eflags: none 00:17:08.008 sectype: none 00:17:08.008 12:41:33 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:17:08.008 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:17:08.267 ===================================================== 00:17:08.267 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:08.267 ===================================================== 00:17:08.267 Controller Capabilities/Features 00:17:08.267 ================================ 00:17:08.267 Vendor ID: 0000 00:17:08.267 Subsystem Vendor ID: 0000 00:17:08.267 Serial Number: b2d438d4c10da72d53f1 00:17:08.267 Model Number: Linux 00:17:08.267 Firmware Version: 6.7.0-68 00:17:08.267 Recommended Arb Burst: 0 00:17:08.267 IEEE OUI Identifier: 00 00 00 00:17:08.267 Multi-path I/O 00:17:08.267 May have multiple subsystem ports: No 00:17:08.267 May have multiple controllers: No 00:17:08.267 Associated with SR-IOV VF: No 00:17:08.267 Max Data Transfer Size: Unlimited 00:17:08.267 Max Number of Namespaces: 0 
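The sequence traced just above builds the kernel NVMe-oF target entirely through configfs: modprobe nvmet, a subsystem directory for nqn.2016-06.io.spdk:testnqn, one namespace backed by the selected /dev/nvme1n1, and a TCP port on 10.0.0.1:4420, joined by a symlink. xtrace does not show where each echo is redirected, so the attribute files below are filled in from the standard nvmet configfs layout rather than read off the log; a sketch of the same steps:

  modprobe nvmet
  cfg=/sys/kernel/config/nvmet
  subsys=$cfg/subsystems/nqn.2016-06.io.spdk:testnqn

  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$cfg/ports/1"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # shows up later as the Model Number
  echo 1             > "$subsys/attr_allow_any_host"
  echo /dev/nvme1n1  > "$subsys/namespaces/1/device_path"
  echo 1             > "$subsys/namespaces/1/enable"
  echo 10.0.0.1      > "$cfg/ports/1/addr_traddr"
  echo tcp           > "$cfg/ports/1/addr_trtype"
  echo 4420          > "$cfg/ports/1/addr_trsvcid"
  echo ipv4          > "$cfg/ports/1/addr_adrfam"
  ln -s "$subsys" "$cfg/ports/1/subsystems/"

The initiator side is then just the discovery command already shown in the log (nvme discover -t tcp -a 10.0.0.1 -s 4420), which returns the two discovery log records printed above.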
00:17:08.267 Max Number of I/O Queues: 1024 00:17:08.267 NVMe Specification Version (VS): 1.3 00:17:08.267 NVMe Specification Version (Identify): 1.3 00:17:08.267 Maximum Queue Entries: 1024 00:17:08.267 Contiguous Queues Required: No 00:17:08.267 Arbitration Mechanisms Supported 00:17:08.267 Weighted Round Robin: Not Supported 00:17:08.267 Vendor Specific: Not Supported 00:17:08.267 Reset Timeout: 7500 ms 00:17:08.267 Doorbell Stride: 4 bytes 00:17:08.267 NVM Subsystem Reset: Not Supported 00:17:08.267 Command Sets Supported 00:17:08.267 NVM Command Set: Supported 00:17:08.267 Boot Partition: Not Supported 00:17:08.267 Memory Page Size Minimum: 4096 bytes 00:17:08.267 Memory Page Size Maximum: 4096 bytes 00:17:08.267 Persistent Memory Region: Not Supported 00:17:08.267 Optional Asynchronous Events Supported 00:17:08.267 Namespace Attribute Notices: Not Supported 00:17:08.267 Firmware Activation Notices: Not Supported 00:17:08.267 ANA Change Notices: Not Supported 00:17:08.267 PLE Aggregate Log Change Notices: Not Supported 00:17:08.267 LBA Status Info Alert Notices: Not Supported 00:17:08.267 EGE Aggregate Log Change Notices: Not Supported 00:17:08.267 Normal NVM Subsystem Shutdown event: Not Supported 00:17:08.267 Zone Descriptor Change Notices: Not Supported 00:17:08.267 Discovery Log Change Notices: Supported 00:17:08.267 Controller Attributes 00:17:08.267 128-bit Host Identifier: Not Supported 00:17:08.267 Non-Operational Permissive Mode: Not Supported 00:17:08.267 NVM Sets: Not Supported 00:17:08.267 Read Recovery Levels: Not Supported 00:17:08.267 Endurance Groups: Not Supported 00:17:08.267 Predictable Latency Mode: Not Supported 00:17:08.267 Traffic Based Keep ALive: Not Supported 00:17:08.267 Namespace Granularity: Not Supported 00:17:08.267 SQ Associations: Not Supported 00:17:08.267 UUID List: Not Supported 00:17:08.267 Multi-Domain Subsystem: Not Supported 00:17:08.267 Fixed Capacity Management: Not Supported 00:17:08.267 Variable Capacity Management: Not Supported 00:17:08.267 Delete Endurance Group: Not Supported 00:17:08.267 Delete NVM Set: Not Supported 00:17:08.267 Extended LBA Formats Supported: Not Supported 00:17:08.267 Flexible Data Placement Supported: Not Supported 00:17:08.267 00:17:08.267 Controller Memory Buffer Support 00:17:08.267 ================================ 00:17:08.267 Supported: No 00:17:08.267 00:17:08.267 Persistent Memory Region Support 00:17:08.267 ================================ 00:17:08.267 Supported: No 00:17:08.267 00:17:08.267 Admin Command Set Attributes 00:17:08.267 ============================ 00:17:08.267 Security Send/Receive: Not Supported 00:17:08.267 Format NVM: Not Supported 00:17:08.267 Firmware Activate/Download: Not Supported 00:17:08.267 Namespace Management: Not Supported 00:17:08.267 Device Self-Test: Not Supported 00:17:08.267 Directives: Not Supported 00:17:08.267 NVMe-MI: Not Supported 00:17:08.267 Virtualization Management: Not Supported 00:17:08.267 Doorbell Buffer Config: Not Supported 00:17:08.267 Get LBA Status Capability: Not Supported 00:17:08.267 Command & Feature Lockdown Capability: Not Supported 00:17:08.267 Abort Command Limit: 1 00:17:08.267 Async Event Request Limit: 1 00:17:08.267 Number of Firmware Slots: N/A 00:17:08.267 Firmware Slot 1 Read-Only: N/A 00:17:08.267 Firmware Activation Without Reset: N/A 00:17:08.267 Multiple Update Detection Support: N/A 00:17:08.267 Firmware Update Granularity: No Information Provided 00:17:08.267 Per-Namespace SMART Log: No 00:17:08.267 Asymmetric Namespace Access Log Page: 
Not Supported 00:17:08.267 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:08.267 Command Effects Log Page: Not Supported 00:17:08.267 Get Log Page Extended Data: Supported 00:17:08.267 Telemetry Log Pages: Not Supported 00:17:08.267 Persistent Event Log Pages: Not Supported 00:17:08.267 Supported Log Pages Log Page: May Support 00:17:08.267 Commands Supported & Effects Log Page: Not Supported 00:17:08.267 Feature Identifiers & Effects Log Page:May Support 00:17:08.267 NVMe-MI Commands & Effects Log Page: May Support 00:17:08.267 Data Area 4 for Telemetry Log: Not Supported 00:17:08.267 Error Log Page Entries Supported: 1 00:17:08.267 Keep Alive: Not Supported 00:17:08.267 00:17:08.267 NVM Command Set Attributes 00:17:08.267 ========================== 00:17:08.267 Submission Queue Entry Size 00:17:08.267 Max: 1 00:17:08.267 Min: 1 00:17:08.267 Completion Queue Entry Size 00:17:08.267 Max: 1 00:17:08.267 Min: 1 00:17:08.267 Number of Namespaces: 0 00:17:08.267 Compare Command: Not Supported 00:17:08.267 Write Uncorrectable Command: Not Supported 00:17:08.267 Dataset Management Command: Not Supported 00:17:08.267 Write Zeroes Command: Not Supported 00:17:08.267 Set Features Save Field: Not Supported 00:17:08.267 Reservations: Not Supported 00:17:08.267 Timestamp: Not Supported 00:17:08.268 Copy: Not Supported 00:17:08.268 Volatile Write Cache: Not Present 00:17:08.268 Atomic Write Unit (Normal): 1 00:17:08.268 Atomic Write Unit (PFail): 1 00:17:08.268 Atomic Compare & Write Unit: 1 00:17:08.268 Fused Compare & Write: Not Supported 00:17:08.268 Scatter-Gather List 00:17:08.268 SGL Command Set: Supported 00:17:08.268 SGL Keyed: Not Supported 00:17:08.268 SGL Bit Bucket Descriptor: Not Supported 00:17:08.268 SGL Metadata Pointer: Not Supported 00:17:08.268 Oversized SGL: Not Supported 00:17:08.268 SGL Metadata Address: Not Supported 00:17:08.268 SGL Offset: Supported 00:17:08.268 Transport SGL Data Block: Not Supported 00:17:08.268 Replay Protected Memory Block: Not Supported 00:17:08.268 00:17:08.268 Firmware Slot Information 00:17:08.268 ========================= 00:17:08.268 Active slot: 0 00:17:08.268 00:17:08.268 00:17:08.268 Error Log 00:17:08.268 ========= 00:17:08.268 00:17:08.268 Active Namespaces 00:17:08.268 ================= 00:17:08.268 Discovery Log Page 00:17:08.268 ================== 00:17:08.268 Generation Counter: 2 00:17:08.268 Number of Records: 2 00:17:08.268 Record Format: 0 00:17:08.268 00:17:08.268 Discovery Log Entry 0 00:17:08.268 ---------------------- 00:17:08.268 Transport Type: 3 (TCP) 00:17:08.268 Address Family: 1 (IPv4) 00:17:08.268 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:08.268 Entry Flags: 00:17:08.268 Duplicate Returned Information: 0 00:17:08.268 Explicit Persistent Connection Support for Discovery: 0 00:17:08.268 Transport Requirements: 00:17:08.268 Secure Channel: Not Specified 00:17:08.268 Port ID: 1 (0x0001) 00:17:08.268 Controller ID: 65535 (0xffff) 00:17:08.268 Admin Max SQ Size: 32 00:17:08.268 Transport Service Identifier: 4420 00:17:08.268 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:08.268 Transport Address: 10.0.0.1 00:17:08.268 Discovery Log Entry 1 00:17:08.268 ---------------------- 00:17:08.268 Transport Type: 3 (TCP) 00:17:08.268 Address Family: 1 (IPv4) 00:17:08.268 Subsystem Type: 2 (NVM Subsystem) 00:17:08.268 Entry Flags: 00:17:08.268 Duplicate Returned Information: 0 00:17:08.268 Explicit Persistent Connection Support for Discovery: 0 00:17:08.268 Transport Requirements: 00:17:08.268 
Secure Channel: Not Specified 00:17:08.268 Port ID: 1 (0x0001) 00:17:08.268 Controller ID: 65535 (0xffff) 00:17:08.268 Admin Max SQ Size: 32 00:17:08.268 Transport Service Identifier: 4420 00:17:08.268 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:17:08.268 Transport Address: 10.0.0.1 00:17:08.268 12:41:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:17:08.268 get_feature(0x01) failed 00:17:08.268 get_feature(0x02) failed 00:17:08.268 get_feature(0x04) failed 00:17:08.268 ===================================================== 00:17:08.268 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:17:08.268 ===================================================== 00:17:08.268 Controller Capabilities/Features 00:17:08.268 ================================ 00:17:08.268 Vendor ID: 0000 00:17:08.268 Subsystem Vendor ID: 0000 00:17:08.268 Serial Number: 51bc7816967f451514bf 00:17:08.268 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:17:08.268 Firmware Version: 6.7.0-68 00:17:08.268 Recommended Arb Burst: 6 00:17:08.268 IEEE OUI Identifier: 00 00 00 00:17:08.268 Multi-path I/O 00:17:08.268 May have multiple subsystem ports: Yes 00:17:08.268 May have multiple controllers: Yes 00:17:08.268 Associated with SR-IOV VF: No 00:17:08.268 Max Data Transfer Size: Unlimited 00:17:08.268 Max Number of Namespaces: 1024 00:17:08.268 Max Number of I/O Queues: 128 00:17:08.268 NVMe Specification Version (VS): 1.3 00:17:08.268 NVMe Specification Version (Identify): 1.3 00:17:08.268 Maximum Queue Entries: 1024 00:17:08.268 Contiguous Queues Required: No 00:17:08.268 Arbitration Mechanisms Supported 00:17:08.268 Weighted Round Robin: Not Supported 00:17:08.268 Vendor Specific: Not Supported 00:17:08.268 Reset Timeout: 7500 ms 00:17:08.268 Doorbell Stride: 4 bytes 00:17:08.268 NVM Subsystem Reset: Not Supported 00:17:08.268 Command Sets Supported 00:17:08.268 NVM Command Set: Supported 00:17:08.268 Boot Partition: Not Supported 00:17:08.268 Memory Page Size Minimum: 4096 bytes 00:17:08.268 Memory Page Size Maximum: 4096 bytes 00:17:08.268 Persistent Memory Region: Not Supported 00:17:08.268 Optional Asynchronous Events Supported 00:17:08.268 Namespace Attribute Notices: Supported 00:17:08.268 Firmware Activation Notices: Not Supported 00:17:08.268 ANA Change Notices: Supported 00:17:08.268 PLE Aggregate Log Change Notices: Not Supported 00:17:08.268 LBA Status Info Alert Notices: Not Supported 00:17:08.268 EGE Aggregate Log Change Notices: Not Supported 00:17:08.268 Normal NVM Subsystem Shutdown event: Not Supported 00:17:08.268 Zone Descriptor Change Notices: Not Supported 00:17:08.268 Discovery Log Change Notices: Not Supported 00:17:08.268 Controller Attributes 00:17:08.268 128-bit Host Identifier: Supported 00:17:08.268 Non-Operational Permissive Mode: Not Supported 00:17:08.268 NVM Sets: Not Supported 00:17:08.268 Read Recovery Levels: Not Supported 00:17:08.268 Endurance Groups: Not Supported 00:17:08.268 Predictable Latency Mode: Not Supported 00:17:08.268 Traffic Based Keep ALive: Supported 00:17:08.268 Namespace Granularity: Not Supported 00:17:08.268 SQ Associations: Not Supported 00:17:08.268 UUID List: Not Supported 00:17:08.268 Multi-Domain Subsystem: Not Supported 00:17:08.268 Fixed Capacity Management: Not Supported 00:17:08.268 Variable Capacity Management: Not Supported 00:17:08.268 
Delete Endurance Group: Not Supported 00:17:08.268 Delete NVM Set: Not Supported 00:17:08.268 Extended LBA Formats Supported: Not Supported 00:17:08.268 Flexible Data Placement Supported: Not Supported 00:17:08.268 00:17:08.268 Controller Memory Buffer Support 00:17:08.268 ================================ 00:17:08.268 Supported: No 00:17:08.268 00:17:08.268 Persistent Memory Region Support 00:17:08.268 ================================ 00:17:08.268 Supported: No 00:17:08.268 00:17:08.268 Admin Command Set Attributes 00:17:08.268 ============================ 00:17:08.268 Security Send/Receive: Not Supported 00:17:08.268 Format NVM: Not Supported 00:17:08.268 Firmware Activate/Download: Not Supported 00:17:08.268 Namespace Management: Not Supported 00:17:08.268 Device Self-Test: Not Supported 00:17:08.268 Directives: Not Supported 00:17:08.268 NVMe-MI: Not Supported 00:17:08.268 Virtualization Management: Not Supported 00:17:08.268 Doorbell Buffer Config: Not Supported 00:17:08.268 Get LBA Status Capability: Not Supported 00:17:08.268 Command & Feature Lockdown Capability: Not Supported 00:17:08.268 Abort Command Limit: 4 00:17:08.268 Async Event Request Limit: 4 00:17:08.268 Number of Firmware Slots: N/A 00:17:08.268 Firmware Slot 1 Read-Only: N/A 00:17:08.268 Firmware Activation Without Reset: N/A 00:17:08.268 Multiple Update Detection Support: N/A 00:17:08.268 Firmware Update Granularity: No Information Provided 00:17:08.268 Per-Namespace SMART Log: Yes 00:17:08.268 Asymmetric Namespace Access Log Page: Supported 00:17:08.268 ANA Transition Time : 10 sec 00:17:08.268 00:17:08.268 Asymmetric Namespace Access Capabilities 00:17:08.268 ANA Optimized State : Supported 00:17:08.268 ANA Non-Optimized State : Supported 00:17:08.268 ANA Inaccessible State : Supported 00:17:08.268 ANA Persistent Loss State : Supported 00:17:08.268 ANA Change State : Supported 00:17:08.268 ANAGRPID is not changed : No 00:17:08.268 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:17:08.268 00:17:08.268 ANA Group Identifier Maximum : 128 00:17:08.268 Number of ANA Group Identifiers : 128 00:17:08.268 Max Number of Allowed Namespaces : 1024 00:17:08.268 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:17:08.268 Command Effects Log Page: Supported 00:17:08.268 Get Log Page Extended Data: Supported 00:17:08.268 Telemetry Log Pages: Not Supported 00:17:08.268 Persistent Event Log Pages: Not Supported 00:17:08.268 Supported Log Pages Log Page: May Support 00:17:08.268 Commands Supported & Effects Log Page: Not Supported 00:17:08.268 Feature Identifiers & Effects Log Page:May Support 00:17:08.268 NVMe-MI Commands & Effects Log Page: May Support 00:17:08.268 Data Area 4 for Telemetry Log: Not Supported 00:17:08.268 Error Log Page Entries Supported: 128 00:17:08.268 Keep Alive: Supported 00:17:08.268 Keep Alive Granularity: 1000 ms 00:17:08.268 00:17:08.268 NVM Command Set Attributes 00:17:08.268 ========================== 00:17:08.268 Submission Queue Entry Size 00:17:08.268 Max: 64 00:17:08.268 Min: 64 00:17:08.268 Completion Queue Entry Size 00:17:08.268 Max: 16 00:17:08.268 Min: 16 00:17:08.268 Number of Namespaces: 1024 00:17:08.268 Compare Command: Not Supported 00:17:08.268 Write Uncorrectable Command: Not Supported 00:17:08.268 Dataset Management Command: Supported 00:17:08.268 Write Zeroes Command: Supported 00:17:08.268 Set Features Save Field: Not Supported 00:17:08.269 Reservations: Not Supported 00:17:08.269 Timestamp: Not Supported 00:17:08.269 Copy: Not Supported 00:17:08.269 Volatile Write Cache: Present 
00:17:08.269 Atomic Write Unit (Normal): 1 00:17:08.269 Atomic Write Unit (PFail): 1 00:17:08.269 Atomic Compare & Write Unit: 1 00:17:08.269 Fused Compare & Write: Not Supported 00:17:08.269 Scatter-Gather List 00:17:08.269 SGL Command Set: Supported 00:17:08.269 SGL Keyed: Not Supported 00:17:08.269 SGL Bit Bucket Descriptor: Not Supported 00:17:08.269 SGL Metadata Pointer: Not Supported 00:17:08.269 Oversized SGL: Not Supported 00:17:08.269 SGL Metadata Address: Not Supported 00:17:08.269 SGL Offset: Supported 00:17:08.269 Transport SGL Data Block: Not Supported 00:17:08.269 Replay Protected Memory Block: Not Supported 00:17:08.269 00:17:08.269 Firmware Slot Information 00:17:08.269 ========================= 00:17:08.269 Active slot: 0 00:17:08.269 00:17:08.269 Asymmetric Namespace Access 00:17:08.269 =========================== 00:17:08.269 Change Count : 0 00:17:08.269 Number of ANA Group Descriptors : 1 00:17:08.269 ANA Group Descriptor : 0 00:17:08.269 ANA Group ID : 1 00:17:08.269 Number of NSID Values : 1 00:17:08.269 Change Count : 0 00:17:08.269 ANA State : 1 00:17:08.269 Namespace Identifier : 1 00:17:08.269 00:17:08.269 Commands Supported and Effects 00:17:08.269 ============================== 00:17:08.269 Admin Commands 00:17:08.269 -------------- 00:17:08.269 Get Log Page (02h): Supported 00:17:08.269 Identify (06h): Supported 00:17:08.269 Abort (08h): Supported 00:17:08.269 Set Features (09h): Supported 00:17:08.269 Get Features (0Ah): Supported 00:17:08.269 Asynchronous Event Request (0Ch): Supported 00:17:08.269 Keep Alive (18h): Supported 00:17:08.269 I/O Commands 00:17:08.269 ------------ 00:17:08.269 Flush (00h): Supported 00:17:08.269 Write (01h): Supported LBA-Change 00:17:08.269 Read (02h): Supported 00:17:08.269 Write Zeroes (08h): Supported LBA-Change 00:17:08.269 Dataset Management (09h): Supported 00:17:08.269 00:17:08.269 Error Log 00:17:08.269 ========= 00:17:08.269 Entry: 0 00:17:08.269 Error Count: 0x3 00:17:08.269 Submission Queue Id: 0x0 00:17:08.269 Command Id: 0x5 00:17:08.269 Phase Bit: 0 00:17:08.269 Status Code: 0x2 00:17:08.269 Status Code Type: 0x0 00:17:08.269 Do Not Retry: 1 00:17:08.269 Error Location: 0x28 00:17:08.269 LBA: 0x0 00:17:08.269 Namespace: 0x0 00:17:08.269 Vendor Log Page: 0x0 00:17:08.269 ----------- 00:17:08.269 Entry: 1 00:17:08.269 Error Count: 0x2 00:17:08.269 Submission Queue Id: 0x0 00:17:08.269 Command Id: 0x5 00:17:08.269 Phase Bit: 0 00:17:08.269 Status Code: 0x2 00:17:08.269 Status Code Type: 0x0 00:17:08.269 Do Not Retry: 1 00:17:08.269 Error Location: 0x28 00:17:08.269 LBA: 0x0 00:17:08.269 Namespace: 0x0 00:17:08.269 Vendor Log Page: 0x0 00:17:08.269 ----------- 00:17:08.269 Entry: 2 00:17:08.269 Error Count: 0x1 00:17:08.269 Submission Queue Id: 0x0 00:17:08.269 Command Id: 0x4 00:17:08.269 Phase Bit: 0 00:17:08.269 Status Code: 0x2 00:17:08.269 Status Code Type: 0x0 00:17:08.269 Do Not Retry: 1 00:17:08.269 Error Location: 0x28 00:17:08.269 LBA: 0x0 00:17:08.269 Namespace: 0x0 00:17:08.269 Vendor Log Page: 0x0 00:17:08.269 00:17:08.269 Number of Queues 00:17:08.269 ================ 00:17:08.269 Number of I/O Submission Queues: 128 00:17:08.269 Number of I/O Completion Queues: 128 00:17:08.269 00:17:08.269 ZNS Specific Controller Data 00:17:08.269 ============================ 00:17:08.269 Zone Append Size Limit: 0 00:17:08.269 00:17:08.269 00:17:08.269 Active Namespaces 00:17:08.269 ================= 00:17:08.269 get_feature(0x05) failed 00:17:08.269 Namespace ID:1 00:17:08.269 Command Set Identifier: NVM (00h) 
00:17:08.269 Deallocate: Supported 00:17:08.269 Deallocated/Unwritten Error: Not Supported 00:17:08.269 Deallocated Read Value: Unknown 00:17:08.269 Deallocate in Write Zeroes: Not Supported 00:17:08.269 Deallocated Guard Field: 0xFFFF 00:17:08.269 Flush: Supported 00:17:08.269 Reservation: Not Supported 00:17:08.269 Namespace Sharing Capabilities: Multiple Controllers 00:17:08.269 Size (in LBAs): 1310720 (5GiB) 00:17:08.269 Capacity (in LBAs): 1310720 (5GiB) 00:17:08.269 Utilization (in LBAs): 1310720 (5GiB) 00:17:08.269 UUID: 0543842d-cde9-4499-941d-f2d21a7e892b 00:17:08.269 Thin Provisioning: Not Supported 00:17:08.269 Per-NS Atomic Units: Yes 00:17:08.269 Atomic Boundary Size (Normal): 0 00:17:08.269 Atomic Boundary Size (PFail): 0 00:17:08.269 Atomic Boundary Offset: 0 00:17:08.269 NGUID/EUI64 Never Reused: No 00:17:08.269 ANA group ID: 1 00:17:08.269 Namespace Write Protected: No 00:17:08.269 Number of LBA Formats: 1 00:17:08.269 Current LBA Format: LBA Format #00 00:17:08.269 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:17:08.269 00:17:08.269 12:41:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:17:08.269 12:41:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:08.269 12:41:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:17:08.528 12:41:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:08.528 12:41:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:17:08.528 12:41:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:08.528 12:41:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:08.528 rmmod nvme_tcp 00:17:08.528 rmmod nvme_fabrics 00:17:08.528 12:41:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:08.528 12:41:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:17:08.528 12:41:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:17:08.528 12:41:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:08.528 12:41:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:08.528 12:41:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:08.528 12:41:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:08.528 12:41:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:08.528 12:41:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:08.528 12:41:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.528 12:41:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:08.528 12:41:34 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.528 12:41:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:08.528 12:41:34 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:17:08.528 12:41:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:17:08.528 
12:41:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:17:08.528 12:41:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:08.528 12:41:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:17:08.528 12:41:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:08.528 12:41:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:17:08.528 12:41:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:17:08.528 12:41:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:17:08.528 12:41:34 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:09.093 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:09.351 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:09.351 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:09.351 00:17:09.351 real 0m2.856s 00:17:09.351 user 0m1.005s 00:17:09.351 sys 0m1.351s 00:17:09.351 12:41:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:09.351 12:41:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.351 ************************************ 00:17:09.351 END TEST nvmf_identify_kernel_target 00:17:09.351 ************************************ 00:17:09.351 12:41:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:09.351 12:41:35 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:09.351 12:41:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:09.351 12:41:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:09.351 12:41:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:09.351 ************************************ 00:17:09.351 START TEST nvmf_auth_host 00:17:09.351 ************************************ 00:17:09.351 12:41:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:17:09.620 * Looking for test storage... 
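The clean_kernel_target trace above tears down the Linux-kernel NVMe-oF target that the identify test had configured: the namespace is disabled, the port-to-subsystem link is removed, the configfs nodes are deleted child-first, and the nvmet modules are unloaded before setup.sh rebinds the PCI devices. A condensed standalone sketch of the same teardown, using the testnqn name from this run (the redirect target of the 'echo 0' step is not shown in the trace; the namespace 'enable' attribute is assumed):

  nqn=nqn.2016-06.io.spdk:testnqn
  nvmet=/sys/kernel/config/nvmet
  echo 0 > "$nvmet/subsystems/$nqn/namespaces/1/enable"   # disable the namespace (assumed target of 'echo 0')
  rm -f "$nvmet/ports/1/subsystems/$nqn"                  # unlink the subsystem from the TCP port
  rmdir "$nvmet/subsystems/$nqn/namespaces/1"             # then remove configfs nodes child-first
  rmdir "$nvmet/ports/1"
  rmdir "$nvmet/subsystems/$nqn"
  modprobe -r nvmet_tcp nvmet                             # unload once nothing references the modules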
00:17:09.620 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=16360ad5-8c23-4d49-afe0-9a35c426fec5 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:09.620 Cannot find device "nvmf_tgt_br" 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:09.620 Cannot find device "nvmf_tgt_br2" 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:09.620 Cannot find device "nvmf_tgt_br" 
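nvmf_veth_init, which the trace below steps through, first clears any leftover interfaces (the "Cannot find device" messages are the expected result of that cleanup) and then builds the virtual test network: an initiator veth pair kept in the root namespace with 10.0.0.1/24, two target-side pairs moved into the nvmf_tgt_ns_spdk namespace with 10.0.0.2/24 and 10.0.0.3/24, all of their *_br peers enslaved to one bridge, and an iptables rule admitting NVMe/TCP traffic on port 4420. A condensed sketch of that layout, reusing the names and addresses from this run (the second target interface and some link-up steps are elided):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator pair, root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target pair, moved into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_init_br master nvmf_br                          # bridge the two sides together
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                               # initiator -> target reachability check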
00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:09.620 Cannot find device "nvmf_tgt_br2" 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:09.620 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:09.620 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:09.620 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:09.877 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:09.877 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:09.877 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:09.877 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:09.877 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:09.877 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:09.877 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:09.877 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:09.877 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:09.877 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:09.877 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:09.877 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:09.877 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:09.877 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:09.877 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:09.877 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:09.877 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:17:09.877 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:09.877 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:09.877 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:09.877 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:09.877 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:17:09.877 00:17:09.877 --- 10.0.0.2 ping statistics --- 00:17:09.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.877 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:17:09.877 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:09.877 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:09.877 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:17:09.877 00:17:09.877 --- 10.0.0.3 ping statistics --- 00:17:09.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.877 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:17:09.877 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:09.877 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:09.877 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:17:09.877 00:17:09.877 --- 10.0.0.1 ping statistics --- 00:17:09.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.877 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:17:09.877 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:09.877 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:17:09.877 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:09.877 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:09.878 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:09.878 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:09.878 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:09.878 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:09.878 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:09.878 12:41:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:17:09.878 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:09.878 12:41:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:09.878 12:41:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.878 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=78857 00:17:09.878 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:17:09.878 12:41:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 78857 00:17:09.878 12:41:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 78857 ']' 00:17:09.878 12:41:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.878 12:41:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:09.878 12:41:35 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.878 12:41:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:09.878 12:41:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2d61c522003df0df4ba7a7951147b481 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.DoE 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2d61c522003df0df4ba7a7951147b481 0 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2d61c522003df0df4ba7a7951147b481 0 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2d61c522003df0df4ba7a7951147b481 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.DoE 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.DoE 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.DoE 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c890d9fb396e4b911dda63a606758cc12d7f68f14a8fb8fe4bfc0800638fb41b 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.1wE 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c890d9fb396e4b911dda63a606758cc12d7f68f14a8fb8fe4bfc0800638fb41b 3 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c890d9fb396e4b911dda63a606758cc12d7f68f14a8fb8fe4bfc0800638fb41b 3 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c890d9fb396e4b911dda63a606758cc12d7f68f14a8fb8fe4bfc0800638fb41b 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.1wE 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.1wE 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.1wE 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:17:11.251 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:11.252 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6a09762b769eb6f4b11087c8850e0f37587b359b65c72fab 00:17:11.252 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:11.252 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.sf0 00:17:11.252 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6a09762b769eb6f4b11087c8850e0f37587b359b65c72fab 0 00:17:11.252 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6a09762b769eb6f4b11087c8850e0f37587b359b65c72fab 0 00:17:11.252 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:11.252 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:11.252 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6a09762b769eb6f4b11087c8850e0f37587b359b65c72fab 00:17:11.252 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:17:11.252 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:17:11.252 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.sf0 00:17:11.252 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.sf0 00:17:11.252 12:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.sf0 00:17:11.252 12:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:17:11.252 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:11.252 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:11.252 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:11.252 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:17:11.252 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:17:11.252 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:11.252 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ebcef9ae4581c1012d74c3e1b45136f58040329ebf7b754f 00:17:11.252 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:11.252 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.sar 00:17:11.252 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ebcef9ae4581c1012d74c3e1b45136f58040329ebf7b754f 2 00:17:11.252 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ebcef9ae4581c1012d74c3e1b45136f58040329ebf7b754f 2 00:17:11.252 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:11.252 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:11.252 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ebcef9ae4581c1012d74c3e1b45136f58040329ebf7b754f 00:17:11.252 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:17:11.252 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:11.252 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.sar 00:17:11.252 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.sar 00:17:11.252 12:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.sar 00:17:11.252 12:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:11.252 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:11.252 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:11.252 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:11.252 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:17:11.252 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:11.252 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:11.252 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4ff5aac1535d79e52560639255a05300 00:17:11.252 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:11.252 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.GlK 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4ff5aac1535d79e52560639255a05300 
1 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4ff5aac1535d79e52560639255a05300 1 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4ff5aac1535d79e52560639255a05300 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.GlK 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.GlK 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.GlK 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a955eb8097201352508d2b342d246849 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.nw7 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a955eb8097201352508d2b342d246849 1 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a955eb8097201352508d2b342d246849 1 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a955eb8097201352508d2b342d246849 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.nw7 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.nw7 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.nw7 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:17:11.511 12:41:37 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9e9621044e4580771013c4264ae0a0f9b8d628afd9ce1d69 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.VYK 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9e9621044e4580771013c4264ae0a0f9b8d628afd9ce1d69 2 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9e9621044e4580771013c4264ae0a0f9b8d628afd9ce1d69 2 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9e9621044e4580771013c4264ae0a0f9b8d628afd9ce1d69 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.VYK 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.VYK 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.VYK 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=85730ec5159226c7343c40ec536d25d9 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Ss9 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 85730ec5159226c7343c40ec536d25d9 0 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 85730ec5159226c7343c40ec536d25d9 0 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=85730ec5159226c7343c40ec536d25d9 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:17:11.511 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:11.769 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Ss9 00:17:11.769 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Ss9 00:17:11.769 12:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Ss9 00:17:11.769 12:41:37 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:17:11.769 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:17:11.769 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:11.769 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:17:11.770 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:17:11.770 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:17:11.770 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:11.770 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7808e3db8c77900927d70abc15b55167616688416130a9a7c0d789ee2d725e36 00:17:11.770 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:11.770 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.pkI 00:17:11.770 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7808e3db8c77900927d70abc15b55167616688416130a9a7c0d789ee2d725e36 3 00:17:11.770 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7808e3db8c77900927d70abc15b55167616688416130a9a7c0d789ee2d725e36 3 00:17:11.770 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:17:11.770 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:11.770 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7808e3db8c77900927d70abc15b55167616688416130a9a7c0d789ee2d725e36 00:17:11.770 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:17:11.770 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:17:11.770 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.pkI 00:17:11.770 12:41:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.pkI 00:17:11.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:11.770 12:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.pkI 00:17:11.770 12:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:17:11.770 12:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78857 00:17:11.770 12:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 78857 ']' 00:17:11.770 12:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.770 12:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:11.770 12:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
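Each gen_dhchap_key call above draws a random secret with xxd, wraps it in the DHHC-1 representation used by NVMe DH-HMAC-CHAP (a "DHHC-1:<digest id>:<base64 of the secret plus a checksum>:" string, produced by an inline python helper whose body is not expanded in the trace), and stores it in a mode-0600 temp file; keys[0..4] become the host secrets and ckeys[0..3] the controller secrets. A rough sketch of the same pattern for one sha512/64 key, with the exact call into the nvmf/common.sh helper assumed:

  key=$(xxd -p -c0 -l 32 /dev/urandom)            # 32 random bytes -> 64 hex chars, as in the trace above
  file=$(mktemp -t spdk.key-sha512.XXX)
  format_dhchap_key "$key" 3 > "$file"            # 3 == sha512 in the digests map above; output redirection assumed
  chmod 0600 "$file"

nvme-cli's gen-dhchap-key subcommand, where available, emits secrets in the same DHHC-1 format if keys are to be generated outside the SPDK scripts.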
00:17:11.770 12:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:11.770 12:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.028 12:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:12.028 12:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:17:12.028 12:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:12.028 12:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.DoE 00:17:12.028 12:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.028 12:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.028 12:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.028 12:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.1wE ]] 00:17:12.028 12:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1wE 00:17:12.028 12:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.028 12:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.028 12:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.028 12:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:12.028 12:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.sf0 00:17:12.028 12:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.028 12:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.028 12:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.028 12:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.sar ]] 00:17:12.028 12:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.sar 00:17:12.028 12:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.028 12:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.028 12:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.028 12:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:12.028 12:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.GlK 00:17:12.028 12:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.028 12:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.028 12:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.028 12:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.nw7 ]] 00:17:12.028 12:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.nw7 00:17:12.028 12:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.028 12:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.028 12:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.028 12:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:17:12.028 12:41:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.VYK 00:17:12.028 12:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.028 12:41:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.028 12:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.028 12:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Ss9 ]] 00:17:12.028 12:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Ss9 00:17:12.028 12:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.028 12:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.028 12:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.028 12:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:17:12.028 12:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.pkI 00:17:12.028 12:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.028 12:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:12.028 12:41:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.028 12:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:17:12.028 12:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:17:12.028 12:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:17:12.028 12:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:12.028 12:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:12.028 12:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:12.028 12:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:12.028 12:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:12.028 12:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:12.028 12:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:12.028 12:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:12.028 12:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:12.028 12:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:12.028 12:41:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:17:12.028 12:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:17:12.028 12:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:17:12.028 12:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:12.028 12:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:12.028 12:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:17:12.028 12:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
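The loop above hands every generated secret file to the running nvmf_tgt through its RPC socket, so that later auth steps can refer to the secrets purely by keyring name (key0..key4, ckey0..ckey3). Outside the rpc_cmd wrapper the same registration can be done with scripts/rpc.py; a minimal sketch reusing two of the file names from this run (default RPC socket assumed):

  scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.DoE       # host secret for key index 0
  scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1wE    # matching controller (bidirectional) secret
  scripts/rpc.py keyring_get_keys                                       # list registered keys (RPC assumed available)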
00:17:12.028 12:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:17:12.028 12:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:17:12.028 12:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:17:12.028 12:41:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:12.593 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:12.593 Waiting for block devices as requested 00:17:12.593 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:12.593 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:13.158 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:13.158 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:17:13.158 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:17:13.158 12:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:17:13.158 12:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:13.158 12:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:13.158 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:17:13.158 12:41:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:17:13.158 12:41:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:17:13.158 No valid GPT data, bailing 00:17:13.158 12:41:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:17:13.158 12:41:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:13.158 12:41:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:13.158 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:17:13.158 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:13.158 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:17:13.158 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:17:13.158 12:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:17:13.158 12:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:13.158 12:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:13.158 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:17:13.158 12:41:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:17:13.158 12:41:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:17:13.468 No valid GPT data, bailing 00:17:13.468 12:41:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:17:13.469 No valid GPT data, bailing 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:17:13.469 No valid GPT data, bailing 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:17:13.469 12:41:39 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid=16360ad5-8c23-4d49-afe0-9a35c426fec5 -a 10.0.0.1 -t tcp -s 4420 00:17:13.469 00:17:13.469 Discovery Log Number of Records 2, Generation counter 2 00:17:13.469 =====Discovery Log Entry 0====== 00:17:13.469 trtype: tcp 00:17:13.469 adrfam: ipv4 00:17:13.469 subtype: current discovery subsystem 00:17:13.469 treq: not specified, sq flow control disable supported 00:17:13.469 portid: 1 00:17:13.469 trsvcid: 4420 00:17:13.469 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:13.469 traddr: 10.0.0.1 00:17:13.469 eflags: none 00:17:13.469 sectype: none 00:17:13.469 =====Discovery Log Entry 1====== 00:17:13.469 trtype: tcp 00:17:13.469 adrfam: ipv4 00:17:13.469 subtype: nvme subsystem 00:17:13.469 treq: not specified, sq flow control disable supported 00:17:13.469 portid: 1 00:17:13.469 trsvcid: 4420 00:17:13.469 subnqn: nqn.2024-02.io.spdk:cnode0 00:17:13.469 traddr: 10.0.0.1 00:17:13.469 eflags: none 00:17:13.469 sectype: none 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmEwOTc2MmI3NjllYjZmNGIxMTA4N2M4ODUwZTBmMzc1ODdiMzU5YjY1YzcyZmFinfjXAQ==: 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:13.469 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:13.727 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmEwOTc2MmI3NjllYjZmNGIxMTA4N2M4ODUwZTBmMzc1ODdiMzU5YjY1YzcyZmFinfjXAQ==: 00:17:13.727 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: ]] 00:17:13.727 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: 00:17:13.727 12:41:39 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:17:13.727 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:17:13.727 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:17:13.727 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:13.727 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:17:13.727 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:13.727 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:17:13.727 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:13.727 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:13.727 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.728 nvme0n1 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQ2MWM1MjIwMDNkZjBkZjRiYTdhNzk1MTE0N2I0ODGMXil2: 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzg5MGQ5ZmIzOTZlNGI5MTFkZGE2M2E2MDY3NThjYzEyZDdmNjhmMTRhOGZiOGZlNGJmYzA4MDA2MzhmYjQxYtRhO90=: 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQ2MWM1MjIwMDNkZjBkZjRiYTdhNzk1MTE0N2I0ODGMXil2: 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzg5MGQ5ZmIzOTZlNGI5MTFkZGE2M2E2MDY3NThjYzEyZDdmNjhmMTRhOGZiOGZlNGJmYzA4MDA2MzhmYjQxYtRhO90=: ]] 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzg5MGQ5ZmIzOTZlNGI5MTFkZGE2M2E2MDY3NThjYzEyZDdmNjhmMTRhOGZiOGZlNGJmYzA4MDA2MzhmYjQxYtRhO90=: 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.728 12:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.986 12:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.986 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:13.986 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:13.986 12:41:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:17:13.986 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:13.986 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:13.986 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:13.986 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:13.986 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:13.986 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:13.986 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:13.986 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:13.986 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.986 12:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.986 12:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.986 nvme0n1 00:17:13.986 12:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.986 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:13.986 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:13.986 12:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.986 12:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.986 12:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.986 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.986 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:13.986 12:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.986 12:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.986 12:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.986 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:13.986 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:13.987 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:13.987 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:13.987 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:13.987 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:13.987 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmEwOTc2MmI3NjllYjZmNGIxMTA4N2M4ODUwZTBmMzc1ODdiMzU5YjY1YzcyZmFinfjXAQ==: 00:17:13.987 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: 00:17:13.987 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:13.987 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:13.987 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NmEwOTc2MmI3NjllYjZmNGIxMTA4N2M4ODUwZTBmMzc1ODdiMzU5YjY1YzcyZmFinfjXAQ==: 00:17:13.987 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: ]] 00:17:13.987 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: 00:17:13.987 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:17:13.987 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:13.987 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:13.987 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:13.987 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:13.987 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:13.987 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:13.987 12:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.987 12:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:13.987 12:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.987 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:13.987 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:13.987 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:13.987 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:13.987 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:13.987 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:13.987 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:13.987 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:13.987 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:13.987 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:13.987 12:41:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:13.987 12:41:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.987 12:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.987 12:41:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.245 nvme0n1 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.245 12:41:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGZmNWFhYzE1MzVkNzllNTI1NjA2MzkyNTVhMDUzMDAWFVfc: 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTk1NWViODA5NzIwMTM1MjUwOGQyYjM0MmQyNDY4NDkoyOqg: 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGZmNWFhYzE1MzVkNzllNTI1NjA2MzkyNTVhMDUzMDAWFVfc: 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTk1NWViODA5NzIwMTM1MjUwOGQyYjM0MmQyNDY4NDkoyOqg: ]] 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTk1NWViODA5NzIwMTM1MjUwOGQyYjM0MmQyNDY4NDkoyOqg: 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.245 nvme0n1 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.245 12:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWU5NjIxMDQ0ZTQ1ODA3NzEwMTNjNDI2NGFlMGEwZjliOGQ2MjhhZmQ5Y2UxZDY5kO91iA==: 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODU3MzBlYzUxNTkyMjZjNzM0M2M0MGVjNTM2ZDI1ZDkK2bAb: 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWU5NjIxMDQ0ZTQ1ODA3NzEwMTNjNDI2NGFlMGEwZjliOGQ2MjhhZmQ5Y2UxZDY5kO91iA==: 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODU3MzBlYzUxNTkyMjZjNzM0M2M0MGVjNTM2ZDI1ZDkK2bAb: ]] 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODU3MzBlYzUxNTkyMjZjNzM0M2M0MGVjNTM2ZDI1ZDkK2bAb: 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:17:14.507 12:41:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.507 nvme0n1 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwOGUzZGI4Yzc3OTAwOTI3ZDcwYWJjMTViNTUxNjc2MTY2ODg0MTYxMzBhOWE3YzBkNzg5ZWUyZDcyNWUzNgCgrKc=: 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwOGUzZGI4Yzc3OTAwOTI3ZDcwYWJjMTViNTUxNjc2MTY2ODg0MTYxMzBhOWE3YzBkNzg5ZWUyZDcyNWUzNgCgrKc=: 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:14.507 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:14.508 12:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.508 12:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.508 12:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.508 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:14.508 12:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:14.508 12:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:14.508 12:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:14.508 12:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.508 12:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.508 12:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:14.508 12:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.508 12:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:14.508 12:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:14.508 12:41:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:14.508 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:14.508 12:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:14.508 12:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.764 nvme0n1 00:17:14.764 12:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.764 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:14.764 12:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.764 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:14.764 12:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.764 12:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.764 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.764 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:14.764 12:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.764 12:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:14.764 12:41:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.764 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:14.764 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:14.764 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:17:14.764 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:14.764 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:14.764 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:14.764 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:14.764 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQ2MWM1MjIwMDNkZjBkZjRiYTdhNzk1MTE0N2I0ODGMXil2: 00:17:14.764 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzg5MGQ5ZmIzOTZlNGI5MTFkZGE2M2E2MDY3NThjYzEyZDdmNjhmMTRhOGZiOGZlNGJmYzA4MDA2MzhmYjQxYtRhO90=: 00:17:14.764 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:14.764 12:41:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:15.022 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQ2MWM1MjIwMDNkZjBkZjRiYTdhNzk1MTE0N2I0ODGMXil2: 00:17:15.022 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzg5MGQ5ZmIzOTZlNGI5MTFkZGE2M2E2MDY3NThjYzEyZDdmNjhmMTRhOGZiOGZlNGJmYzA4MDA2MzhmYjQxYtRhO90=: ]] 00:17:15.022 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzg5MGQ5ZmIzOTZlNGI5MTFkZGE2M2E2MDY3NThjYzEyZDdmNjhmMTRhOGZiOGZlNGJmYzA4MDA2MzhmYjQxYtRhO90=: 00:17:15.022 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:17:15.022 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:15.022 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:15.022 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:15.022 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:15.022 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:15.022 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:17:15.022 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.022 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.022 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.022 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:15.022 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:15.022 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:15.022 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:15.022 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.022 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.022 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:15.022 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.022 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:15.022 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:15.022 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:15.022 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.022 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.022 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.279 nvme0n1 00:17:15.279 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.279 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.279 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.279 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:15.279 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.279 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.279 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.279 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.279 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.279 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.279 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.279 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:15.279 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:17:15.279 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:15.280 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:15.280 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:15.280 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:15.280 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NmEwOTc2MmI3NjllYjZmNGIxMTA4N2M4ODUwZTBmMzc1ODdiMzU5YjY1YzcyZmFinfjXAQ==: 00:17:15.280 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: 00:17:15.280 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:15.280 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:15.280 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmEwOTc2MmI3NjllYjZmNGIxMTA4N2M4ODUwZTBmMzc1ODdiMzU5YjY1YzcyZmFinfjXAQ==: 00:17:15.280 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: ]] 00:17:15.280 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: 00:17:15.280 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:17:15.280 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:15.280 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:15.280 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:15.280 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:15.280 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:15.280 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:15.280 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.280 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.280 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.280 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:15.280 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:15.280 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:15.280 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:15.280 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.280 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.280 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:15.280 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.280 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:15.280 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:15.280 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:15.280 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.280 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.280 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.537 nvme0n1 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.537 12:41:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGZmNWFhYzE1MzVkNzllNTI1NjA2MzkyNTVhMDUzMDAWFVfc: 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTk1NWViODA5NzIwMTM1MjUwOGQyYjM0MmQyNDY4NDkoyOqg: 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGZmNWFhYzE1MzVkNzllNTI1NjA2MzkyNTVhMDUzMDAWFVfc: 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTk1NWViODA5NzIwMTM1MjUwOGQyYjM0MmQyNDY4NDkoyOqg: ]] 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTk1NWViODA5NzIwMTM1MjUwOGQyYjM0MmQyNDY4NDkoyOqg: 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.537 nvme0n1 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.537 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.538 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWU5NjIxMDQ0ZTQ1ODA3NzEwMTNjNDI2NGFlMGEwZjliOGQ2MjhhZmQ5Y2UxZDY5kO91iA==: 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODU3MzBlYzUxNTkyMjZjNzM0M2M0MGVjNTM2ZDI1ZDkK2bAb: 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OWU5NjIxMDQ0ZTQ1ODA3NzEwMTNjNDI2NGFlMGEwZjliOGQ2MjhhZmQ5Y2UxZDY5kO91iA==: 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODU3MzBlYzUxNTkyMjZjNzM0M2M0MGVjNTM2ZDI1ZDkK2bAb: ]] 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODU3MzBlYzUxNTkyMjZjNzM0M2M0MGVjNTM2ZDI1ZDkK2bAb: 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.796 nvme0n1 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwOGUzZGI4Yzc3OTAwOTI3ZDcwYWJjMTViNTUxNjc2MTY2ODg0MTYxMzBhOWE3YzBkNzg5ZWUyZDcyNWUzNgCgrKc=: 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:15.796 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:15.797 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwOGUzZGI4Yzc3OTAwOTI3ZDcwYWJjMTViNTUxNjc2MTY2ODg0MTYxMzBhOWE3YzBkNzg5ZWUyZDcyNWUzNgCgrKc=: 00:17:15.797 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:15.797 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:17:15.797 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:15.797 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:15.797 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:15.797 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:15.797 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:15.797 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:15.797 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.797 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:15.797 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.797 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:15.797 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:15.797 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:15.797 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:15.797 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.797 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.797 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:15.797 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.797 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
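For reference, the target-side setup that the nvmf/common.sh and host/auth.sh traces above perform can be condensed to roughly the following shell sketch. The values (NQNs, address, port, DHCHAP secrets) are taken from this log; the specific configfs attribute files written by each echo are inferred from the standard kernel nvmet layout and are an assumption, since the trace only shows the echoed values, not the redirect targets.

# Sketch of the kernel nvmet TCP target built above (run as root; assumes an
# unused local NVMe block device, /dev/nvme1n1 in this run).
modprobe nvmet
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=/sys/kernel/config/nvmet/ports/1
mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$port"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_serial"      # assumed attribute file
echo 1             > "$subsys/namespaces/1/enable"                # assumed attribute file
echo /dev/nvme1n1  > "$subsys/namespaces/1/device_path"           # backing device found above
echo 10.0.0.1 > "$port/addr_traddr"                                # assumed attribute files for
echo tcp      > "$port/addr_trtype"                                # the addr/trtype/trsvcid/adrfam
echo 4420     > "$port/addr_trsvcid"                               # echoes in the trace
echo ipv4     > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"

# Host authorization and DHCHAP key provisioning, as in host/auth.sh
# nvmet_auth_set_key; keys abbreviated here, full DHHC-1 secrets are in the log.
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
mkdir "$host"
echo 0 > "$subsys/attr_allow_any_host"                             # assumed: restrict to allowed_hosts
ln -s "$host" "$subsys/allowed_hosts/"
echo 'hmac(sha256)'        > "$host/dhchap_hash"                   # assumed attribute file
echo ffdhe2048             > "$host/dhchap_dhgroup"                # assumed attribute file
echo 'DHHC-1:00:NmEwOTc2...' > "$host/dhchap_key"                  # host key (abbreviated)
echo 'DHHC-1:02:ZWJjZWY5...' > "$host/dhchap_ctrl_key"             # controller key (abbreviated)

# Verification step, exactly as issued in the trace:
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid=16360ad5-8c23-4d49-afe0-9a35c426fec5 -a 10.0.0.1 -t tcp -s 4420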
00:17:15.797 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:15.797 12:41:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:15.797 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:15.797 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.797 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.053 nvme0n1 00:17:16.053 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.053 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.053 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.053 12:41:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:16.053 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.053 12:41:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.053 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.053 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.053 12:41:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.053 12:41:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.053 12:41:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.053 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:16.053 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:16.053 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:17:16.053 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:16.053 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:16.053 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:16.053 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:16.053 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQ2MWM1MjIwMDNkZjBkZjRiYTdhNzk1MTE0N2I0ODGMXil2: 00:17:16.054 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzg5MGQ5ZmIzOTZlNGI5MTFkZGE2M2E2MDY3NThjYzEyZDdmNjhmMTRhOGZiOGZlNGJmYzA4MDA2MzhmYjQxYtRhO90=: 00:17:16.054 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:16.054 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:16.617 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQ2MWM1MjIwMDNkZjBkZjRiYTdhNzk1MTE0N2I0ODGMXil2: 00:17:16.617 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzg5MGQ5ZmIzOTZlNGI5MTFkZGE2M2E2MDY3NThjYzEyZDdmNjhmMTRhOGZiOGZlNGJmYzA4MDA2MzhmYjQxYtRhO90=: ]] 00:17:16.617 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzg5MGQ5ZmIzOTZlNGI5MTFkZGE2M2E2MDY3NThjYzEyZDdmNjhmMTRhOGZiOGZlNGJmYzA4MDA2MzhmYjQxYtRhO90=: 00:17:16.617 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:17:16.617 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
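Each connect_authenticate iteration in the trace narrows the initiator's allowed DHCHAP digests and DH groups via bdev_nvme_set_options, attaches a controller with the matching key pair, checks it with bdev_nvme_get_controllers, and detaches it. A rough equivalent of one iteration (sha256, ffdhe3072, keyid 2) using scripts/rpc.py is sketched below, assuming rpc_cmd wraps that script as common/autotest_common.sh normally does; the key names key2/ckey2 refer to keys loaded earlier by host/auth.sh and are not defined in this excerpt.

# One connect_authenticate pass against the running SPDK initiator app.
./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 \
        --dhchap-dhgroups ffdhe3072
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
# The test then confirms the controller came up and tears it down:
./scripts/rpc.py bdev_nvme_get_controllers
./scripts/rpc.py bdev_nvme_detach_controller nvme0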
00:17:16.617 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:16.617 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:16.617 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:16.617 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:16.617 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:16.617 12:41:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.617 12:41:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.874 12:41:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.874 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:16.874 12:41:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:16.874 12:41:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:16.874 12:41:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:16.874 12:41:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.874 12:41:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.874 12:41:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:16.875 12:41:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.875 12:41:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:16.875 12:41:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:16.875 12:41:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:16.875 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.875 12:41:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.875 12:41:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.875 nvme0n1 00:17:16.875 12:41:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.875 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.875 12:41:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.875 12:41:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:16.875 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:16.875 12:41:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.133 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.133 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.133 12:41:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.133 12:41:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.133 12:41:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.133 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:17.133 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:17:17.133 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.133 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:17.133 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:17.133 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:17.133 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmEwOTc2MmI3NjllYjZmNGIxMTA4N2M4ODUwZTBmMzc1ODdiMzU5YjY1YzcyZmFinfjXAQ==: 00:17:17.133 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: 00:17:17.133 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:17.133 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:17.133 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmEwOTc2MmI3NjllYjZmNGIxMTA4N2M4ODUwZTBmMzc1ODdiMzU5YjY1YzcyZmFinfjXAQ==: 00:17:17.133 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: ]] 00:17:17.133 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: 00:17:17.133 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:17:17.133 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.133 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:17.133 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:17.133 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:17.133 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.133 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:17.133 12:41:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.133 12:41:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.133 12:41:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.133 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.133 12:41:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:17.133 12:41:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:17.133 12:41:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:17.133 12:41:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.133 12:41:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.133 12:41:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:17.133 12:41:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.133 12:41:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:17.133 12:41:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:17.133 12:41:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:17.133 12:41:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.134 12:41:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.134 12:41:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.134 nvme0n1 00:17:17.134 12:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.134 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.134 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:17.134 12:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.134 12:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.134 12:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGZmNWFhYzE1MzVkNzllNTI1NjA2MzkyNTVhMDUzMDAWFVfc: 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTk1NWViODA5NzIwMTM1MjUwOGQyYjM0MmQyNDY4NDkoyOqg: 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGZmNWFhYzE1MzVkNzllNTI1NjA2MzkyNTVhMDUzMDAWFVfc: 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTk1NWViODA5NzIwMTM1MjUwOGQyYjM0MmQyNDY4NDkoyOqg: ]] 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTk1NWViODA5NzIwMTM1MjUwOGQyYjM0MmQyNDY4NDkoyOqg: 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.392 nvme0n1 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.392 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:17.650 12:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.650 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.650 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.650 12:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.650 12:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.650 12:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.650 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:17.650 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:17:17.650 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.650 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:17.650 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:17.650 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:17.650 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OWU5NjIxMDQ0ZTQ1ODA3NzEwMTNjNDI2NGFlMGEwZjliOGQ2MjhhZmQ5Y2UxZDY5kO91iA==: 00:17:17.650 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODU3MzBlYzUxNTkyMjZjNzM0M2M0MGVjNTM2ZDI1ZDkK2bAb: 00:17:17.650 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:17.650 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:17.650 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWU5NjIxMDQ0ZTQ1ODA3NzEwMTNjNDI2NGFlMGEwZjliOGQ2MjhhZmQ5Y2UxZDY5kO91iA==: 00:17:17.650 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODU3MzBlYzUxNTkyMjZjNzM0M2M0MGVjNTM2ZDI1ZDkK2bAb: ]] 00:17:17.650 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODU3MzBlYzUxNTkyMjZjNzM0M2M0MGVjNTM2ZDI1ZDkK2bAb: 00:17:17.650 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:17:17.650 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.650 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:17.650 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:17.650 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:17.650 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.650 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:17.650 12:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.650 12:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.650 12:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.650 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.650 12:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:17.650 12:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:17.650 12:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:17.650 12:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.650 12:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.650 12:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:17.650 12:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.650 12:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:17.650 12:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:17.650 12:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:17.650 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:17.650 12:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.650 12:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.908 nvme0n1 00:17:17.908 12:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.908 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:17:17.908 12:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.908 12:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.908 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:17.908 12:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.908 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.908 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.908 12:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.908 12:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.908 12:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.908 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:17.908 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:17:17.908 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:17.908 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:17.908 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:17.908 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:17.908 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwOGUzZGI4Yzc3OTAwOTI3ZDcwYWJjMTViNTUxNjc2MTY2ODg0MTYxMzBhOWE3YzBkNzg5ZWUyZDcyNWUzNgCgrKc=: 00:17:17.908 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:17.908 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:17.908 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:17.909 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwOGUzZGI4Yzc3OTAwOTI3ZDcwYWJjMTViNTUxNjc2MTY2ODg0MTYxMzBhOWE3YzBkNzg5ZWUyZDcyNWUzNgCgrKc=: 00:17:17.909 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:17.909 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:17:17.909 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:17.909 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:17.909 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:17.909 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:17.909 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:17.909 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:17.909 12:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.909 12:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.909 12:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.909 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:17.909 12:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:17.909 12:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:17.909 12:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:17.909 12:41:43 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.909 12:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.909 12:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:17.909 12:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.909 12:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:17.909 12:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:17.909 12:41:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:17.909 12:41:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:17.909 12:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.909 12:41:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.166 nvme0n1 00:17:18.166 12:41:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.166 12:41:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.166 12:41:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.166 12:41:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:18.166 12:41:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.166 12:41:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.166 12:41:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.166 12:41:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:18.166 12:41:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.166 12:41:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:18.166 12:41:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.166 12:41:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:18.166 12:41:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:18.166 12:41:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:17:18.166 12:41:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:18.166 12:41:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:18.166 12:41:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:18.166 12:41:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:18.166 12:41:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQ2MWM1MjIwMDNkZjBkZjRiYTdhNzk1MTE0N2I0ODGMXil2: 00:17:18.166 12:41:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzg5MGQ5ZmIzOTZlNGI5MTFkZGE2M2E2MDY3NThjYzEyZDdmNjhmMTRhOGZiOGZlNGJmYzA4MDA2MzhmYjQxYtRhO90=: 00:17:18.166 12:41:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:18.166 12:41:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:20.083 12:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQ2MWM1MjIwMDNkZjBkZjRiYTdhNzk1MTE0N2I0ODGMXil2: 00:17:20.083 12:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:Yzg5MGQ5ZmIzOTZlNGI5MTFkZGE2M2E2MDY3NThjYzEyZDdmNjhmMTRhOGZiOGZlNGJmYzA4MDA2MzhmYjQxYtRhO90=: ]] 00:17:20.083 12:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzg5MGQ5ZmIzOTZlNGI5MTFkZGE2M2E2MDY3NThjYzEyZDdmNjhmMTRhOGZiOGZlNGJmYzA4MDA2MzhmYjQxYtRhO90=: 00:17:20.083 12:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:17:20.083 12:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:20.083 12:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:20.083 12:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:20.083 12:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:20.083 12:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:20.083 12:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:20.083 12:41:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.083 12:41:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.083 12:41:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.083 12:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:20.083 12:41:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:20.083 12:41:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:20.083 12:41:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:20.083 12:41:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.083 12:41:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.083 12:41:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:20.083 12:41:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.083 12:41:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:20.083 12:41:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:20.083 12:41:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:20.083 12:41:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.083 12:41:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.083 12:41:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.341 nvme0n1 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmEwOTc2MmI3NjllYjZmNGIxMTA4N2M4ODUwZTBmMzc1ODdiMzU5YjY1YzcyZmFinfjXAQ==: 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmEwOTc2MmI3NjllYjZmNGIxMTA4N2M4ODUwZTBmMzc1ODdiMzU5YjY1YzcyZmFinfjXAQ==: 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: ]] 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.341 12:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.598 nvme0n1 00:17:20.599 12:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.599 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:20.599 12:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.599 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:20.599 12:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.599 12:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.857 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.857 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:20.857 12:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.857 12:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.857 12:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.857 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:20.857 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:17:20.857 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:20.857 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:20.857 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:20.857 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:20.857 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGZmNWFhYzE1MzVkNzllNTI1NjA2MzkyNTVhMDUzMDAWFVfc: 00:17:20.857 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTk1NWViODA5NzIwMTM1MjUwOGQyYjM0MmQyNDY4NDkoyOqg: 00:17:20.857 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:20.857 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:20.857 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGZmNWFhYzE1MzVkNzllNTI1NjA2MzkyNTVhMDUzMDAWFVfc: 00:17:20.857 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTk1NWViODA5NzIwMTM1MjUwOGQyYjM0MmQyNDY4NDkoyOqg: ]] 00:17:20.857 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTk1NWViODA5NzIwMTM1MjUwOGQyYjM0MmQyNDY4NDkoyOqg: 00:17:20.857 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:17:20.857 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:20.857 
12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:20.857 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:20.857 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:20.857 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:20.857 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:20.857 12:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.857 12:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:20.857 12:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.857 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:20.857 12:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:20.857 12:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:20.857 12:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:20.857 12:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.857 12:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.857 12:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:20.857 12:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.857 12:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:20.857 12:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:20.857 12:41:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:20.857 12:41:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.857 12:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.857 12:41:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.114 nvme0n1 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWU5NjIxMDQ0ZTQ1ODA3NzEwMTNjNDI2NGFlMGEwZjliOGQ2MjhhZmQ5Y2UxZDY5kO91iA==: 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODU3MzBlYzUxNTkyMjZjNzM0M2M0MGVjNTM2ZDI1ZDkK2bAb: 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWU5NjIxMDQ0ZTQ1ODA3NzEwMTNjNDI2NGFlMGEwZjliOGQ2MjhhZmQ5Y2UxZDY5kO91iA==: 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODU3MzBlYzUxNTkyMjZjNzM0M2M0MGVjNTM2ZDI1ZDkK2bAb: ]] 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODU3MzBlYzUxNTkyMjZjNzM0M2M0MGVjNTM2ZDI1ZDkK2bAb: 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.114 12:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.679 nvme0n1 00:17:21.679 12:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.679 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:21.679 12:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.679 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:21.679 12:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.680 12:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.680 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.680 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:21.680 12:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.680 12:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.680 12:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.680 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:21.680 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:17:21.680 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:21.680 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:21.680 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:21.680 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:21.680 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwOGUzZGI4Yzc3OTAwOTI3ZDcwYWJjMTViNTUxNjc2MTY2ODg0MTYxMzBhOWE3YzBkNzg5ZWUyZDcyNWUzNgCgrKc=: 00:17:21.680 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:21.680 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:21.680 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:21.680 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwOGUzZGI4Yzc3OTAwOTI3ZDcwYWJjMTViNTUxNjc2MTY2ODg0MTYxMzBhOWE3YzBkNzg5ZWUyZDcyNWUzNgCgrKc=: 00:17:21.680 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:21.680 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:17:21.680 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:21.680 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:21.680 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:21.680 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:21.680 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:21.680 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:21.680 12:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.680 12:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.680 12:41:47 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.680 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:21.680 12:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:21.680 12:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:21.680 12:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:21.680 12:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:21.680 12:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:21.680 12:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:21.680 12:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:21.680 12:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:21.680 12:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:21.680 12:41:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:21.680 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:21.680 12:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.680 12:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.938 nvme0n1 00:17:21.938 12:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.938 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:21.938 12:41:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:21.938 12:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.938 12:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:21.938 12:41:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.938 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.938 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:21.938 12:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.938 12:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.196 12:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.196 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:22.196 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:22.196 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:17:22.196 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:22.196 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:22.196 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:22.196 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:22.196 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQ2MWM1MjIwMDNkZjBkZjRiYTdhNzk1MTE0N2I0ODGMXil2: 00:17:22.196 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:Yzg5MGQ5ZmIzOTZlNGI5MTFkZGE2M2E2MDY3NThjYzEyZDdmNjhmMTRhOGZiOGZlNGJmYzA4MDA2MzhmYjQxYtRhO90=: 00:17:22.196 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:22.196 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:22.196 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQ2MWM1MjIwMDNkZjBkZjRiYTdhNzk1MTE0N2I0ODGMXil2: 00:17:22.196 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzg5MGQ5ZmIzOTZlNGI5MTFkZGE2M2E2MDY3NThjYzEyZDdmNjhmMTRhOGZiOGZlNGJmYzA4MDA2MzhmYjQxYtRhO90=: ]] 00:17:22.196 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzg5MGQ5ZmIzOTZlNGI5MTFkZGE2M2E2MDY3NThjYzEyZDdmNjhmMTRhOGZiOGZlNGJmYzA4MDA2MzhmYjQxYtRhO90=: 00:17:22.196 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:17:22.196 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:22.196 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:22.196 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:22.196 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:22.196 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:22.196 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:22.196 12:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.196 12:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.196 12:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.196 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:22.196 12:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:22.196 12:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:22.196 12:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:22.196 12:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:22.196 12:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:22.196 12:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:22.196 12:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:22.196 12:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:22.196 12:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:22.196 12:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:22.196 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.196 12:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.196 12:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.763 nvme0n1 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:22.763 12:41:48 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmEwOTc2MmI3NjllYjZmNGIxMTA4N2M4ODUwZTBmMzc1ODdiMzU5YjY1YzcyZmFinfjXAQ==: 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmEwOTc2MmI3NjllYjZmNGIxMTA4N2M4ODUwZTBmMzc1ODdiMzU5YjY1YzcyZmFinfjXAQ==: 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: ]] 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.763 12:41:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.329 nvme0n1 00:17:23.329 12:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.329 12:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:23.329 12:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:23.329 12:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.329 12:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.329 12:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.587 12:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.587 12:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:23.587 12:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.587 12:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.587 12:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.587 12:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:23.587 12:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:17:23.587 12:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:23.587 12:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:23.587 12:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:23.587 12:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:23.587 12:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGZmNWFhYzE1MzVkNzllNTI1NjA2MzkyNTVhMDUzMDAWFVfc: 00:17:23.587 12:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTk1NWViODA5NzIwMTM1MjUwOGQyYjM0MmQyNDY4NDkoyOqg: 00:17:23.587 12:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:23.587 12:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:23.587 12:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NGZmNWFhYzE1MzVkNzllNTI1NjA2MzkyNTVhMDUzMDAWFVfc: 00:17:23.587 12:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTk1NWViODA5NzIwMTM1MjUwOGQyYjM0MmQyNDY4NDkoyOqg: ]] 00:17:23.587 12:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTk1NWViODA5NzIwMTM1MjUwOGQyYjM0MmQyNDY4NDkoyOqg: 00:17:23.587 12:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:17:23.587 12:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:23.587 12:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:23.587 12:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:23.587 12:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:23.587 12:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:23.587 12:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:23.587 12:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.587 12:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.587 12:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.587 12:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:23.587 12:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:23.587 12:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:23.587 12:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:23.587 12:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:23.587 12:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:23.587 12:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:23.587 12:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:23.587 12:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:23.587 12:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:23.587 12:41:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:23.587 12:41:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.587 12:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.587 12:41:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.153 nvme0n1 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.153 
12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWU5NjIxMDQ0ZTQ1ODA3NzEwMTNjNDI2NGFlMGEwZjliOGQ2MjhhZmQ5Y2UxZDY5kO91iA==: 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODU3MzBlYzUxNTkyMjZjNzM0M2M0MGVjNTM2ZDI1ZDkK2bAb: 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWU5NjIxMDQ0ZTQ1ODA3NzEwMTNjNDI2NGFlMGEwZjliOGQ2MjhhZmQ5Y2UxZDY5kO91iA==: 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODU3MzBlYzUxNTkyMjZjNzM0M2M0MGVjNTM2ZDI1ZDkK2bAb: ]] 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODU3MzBlYzUxNTkyMjZjNzM0M2M0MGVjNTM2ZDI1ZDkK2bAb: 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.153 12:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.720 nvme0n1 00:17:24.720 12:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.720 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.720 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:24.720 12:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.720 12:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.720 12:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.978 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.978 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.978 12:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.978 12:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.978 12:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.978 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:24.978 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:17:24.978 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:24.978 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:24.978 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:24.978 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:24.978 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwOGUzZGI4Yzc3OTAwOTI3ZDcwYWJjMTViNTUxNjc2MTY2ODg0MTYxMzBhOWE3YzBkNzg5ZWUyZDcyNWUzNgCgrKc=: 00:17:24.978 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:24.978 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:24.978 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:24.978 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwOGUzZGI4Yzc3OTAwOTI3ZDcwYWJjMTViNTUxNjc2MTY2ODg0MTYxMzBhOWE3YzBkNzg5ZWUyZDcyNWUzNgCgrKc=: 00:17:24.978 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:24.978 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:17:24.978 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:24.978 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:17:24.978 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:24.978 
12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:24.978 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:24.978 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:24.978 12:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.978 12:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.978 12:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.978 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:24.978 12:41:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:24.978 12:41:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:24.978 12:41:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:24.978 12:41:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.978 12:41:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.978 12:41:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:24.978 12:41:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.978 12:41:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:24.978 12:41:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:24.978 12:41:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:24.978 12:41:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:24.978 12:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.978 12:41:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.544 nvme0n1 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQ2MWM1MjIwMDNkZjBkZjRiYTdhNzk1MTE0N2I0ODGMXil2: 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzg5MGQ5ZmIzOTZlNGI5MTFkZGE2M2E2MDY3NThjYzEyZDdmNjhmMTRhOGZiOGZlNGJmYzA4MDA2MzhmYjQxYtRhO90=: 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQ2MWM1MjIwMDNkZjBkZjRiYTdhNzk1MTE0N2I0ODGMXil2: 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzg5MGQ5ZmIzOTZlNGI5MTFkZGE2M2E2MDY3NThjYzEyZDdmNjhmMTRhOGZiOGZlNGJmYzA4MDA2MzhmYjQxYtRhO90=: ]] 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzg5MGQ5ZmIzOTZlNGI5MTFkZGE2M2E2MDY3NThjYzEyZDdmNjhmMTRhOGZiOGZlNGJmYzA4MDA2MzhmYjQxYtRhO90=: 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.544 12:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.803 nvme0n1 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmEwOTc2MmI3NjllYjZmNGIxMTA4N2M4ODUwZTBmMzc1ODdiMzU5YjY1YzcyZmFinfjXAQ==: 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmEwOTc2MmI3NjllYjZmNGIxMTA4N2M4ODUwZTBmMzc1ODdiMzU5YjY1YzcyZmFinfjXAQ==: 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: ]] 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.803 nvme0n1 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.803 12:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.077 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.077 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.077 12:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.077 12:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.077 12:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.077 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.077 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:17:26.077 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.077 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:26.077 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:26.077 12:41:51 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:17:26.077 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGZmNWFhYzE1MzVkNzllNTI1NjA2MzkyNTVhMDUzMDAWFVfc: 00:17:26.077 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTk1NWViODA5NzIwMTM1MjUwOGQyYjM0MmQyNDY4NDkoyOqg: 00:17:26.077 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:26.077 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:26.077 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGZmNWFhYzE1MzVkNzllNTI1NjA2MzkyNTVhMDUzMDAWFVfc: 00:17:26.077 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTk1NWViODA5NzIwMTM1MjUwOGQyYjM0MmQyNDY4NDkoyOqg: ]] 00:17:26.077 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTk1NWViODA5NzIwMTM1MjUwOGQyYjM0MmQyNDY4NDkoyOqg: 00:17:26.077 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:17:26.077 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.077 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:26.077 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:26.077 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:26.078 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.078 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:26.078 12:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.078 12:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.078 12:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.078 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.078 12:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:26.078 12:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:26.078 12:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:26.078 12:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.078 12:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.078 12:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:26.078 12:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.078 12:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:26.078 12:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:26.078 12:41:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:26.078 12:41:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.078 12:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.078 12:41:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.078 nvme0n1 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWU5NjIxMDQ0ZTQ1ODA3NzEwMTNjNDI2NGFlMGEwZjliOGQ2MjhhZmQ5Y2UxZDY5kO91iA==: 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODU3MzBlYzUxNTkyMjZjNzM0M2M0MGVjNTM2ZDI1ZDkK2bAb: 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWU5NjIxMDQ0ZTQ1ODA3NzEwMTNjNDI2NGFlMGEwZjliOGQ2MjhhZmQ5Y2UxZDY5kO91iA==: 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODU3MzBlYzUxNTkyMjZjNzM0M2M0MGVjNTM2ZDI1ZDkK2bAb: ]] 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODU3MzBlYzUxNTkyMjZjNzM0M2M0MGVjNTM2ZDI1ZDkK2bAb: 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.078 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.337 nvme0n1 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwOGUzZGI4Yzc3OTAwOTI3ZDcwYWJjMTViNTUxNjc2MTY2ODg0MTYxMzBhOWE3YzBkNzg5ZWUyZDcyNWUzNgCgrKc=: 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NzgwOGUzZGI4Yzc3OTAwOTI3ZDcwYWJjMTViNTUxNjc2MTY2ODg0MTYxMzBhOWE3YzBkNzg5ZWUyZDcyNWUzNgCgrKc=: 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.337 nvme0n1 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.337 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQ2MWM1MjIwMDNkZjBkZjRiYTdhNzk1MTE0N2I0ODGMXil2: 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzg5MGQ5ZmIzOTZlNGI5MTFkZGE2M2E2MDY3NThjYzEyZDdmNjhmMTRhOGZiOGZlNGJmYzA4MDA2MzhmYjQxYtRhO90=: 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQ2MWM1MjIwMDNkZjBkZjRiYTdhNzk1MTE0N2I0ODGMXil2: 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzg5MGQ5ZmIzOTZlNGI5MTFkZGE2M2E2MDY3NThjYzEyZDdmNjhmMTRhOGZiOGZlNGJmYzA4MDA2MzhmYjQxYtRhO90=: ]] 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzg5MGQ5ZmIzOTZlNGI5MTFkZGE2M2E2MDY3NThjYzEyZDdmNjhmMTRhOGZiOGZlNGJmYzA4MDA2MzhmYjQxYtRhO90=: 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.633 nvme0n1 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmEwOTc2MmI3NjllYjZmNGIxMTA4N2M4ODUwZTBmMzc1ODdiMzU5YjY1YzcyZmFinfjXAQ==: 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmEwOTc2MmI3NjllYjZmNGIxMTA4N2M4ODUwZTBmMzc1ODdiMzU5YjY1YzcyZmFinfjXAQ==: 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: ]] 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.633 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.911 nvme0n1 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGZmNWFhYzE1MzVkNzllNTI1NjA2MzkyNTVhMDUzMDAWFVfc: 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTk1NWViODA5NzIwMTM1MjUwOGQyYjM0MmQyNDY4NDkoyOqg: 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGZmNWFhYzE1MzVkNzllNTI1NjA2MzkyNTVhMDUzMDAWFVfc: 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTk1NWViODA5NzIwMTM1MjUwOGQyYjM0MmQyNDY4NDkoyOqg: ]] 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTk1NWViODA5NzIwMTM1MjUwOGQyYjM0MmQyNDY4NDkoyOqg: 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:26.911 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:26.912 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:26.912 12:41:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:26.912 12:41:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.912 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.912 12:41:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.173 nvme0n1 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWU5NjIxMDQ0ZTQ1ODA3NzEwMTNjNDI2NGFlMGEwZjliOGQ2MjhhZmQ5Y2UxZDY5kO91iA==: 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODU3MzBlYzUxNTkyMjZjNzM0M2M0MGVjNTM2ZDI1ZDkK2bAb: 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWU5NjIxMDQ0ZTQ1ODA3NzEwMTNjNDI2NGFlMGEwZjliOGQ2MjhhZmQ5Y2UxZDY5kO91iA==: 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODU3MzBlYzUxNTkyMjZjNzM0M2M0MGVjNTM2ZDI1ZDkK2bAb: ]] 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODU3MzBlYzUxNTkyMjZjNzM0M2M0MGVjNTM2ZDI1ZDkK2bAb: 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.173 nvme0n1 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.173 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NzgwOGUzZGI4Yzc3OTAwOTI3ZDcwYWJjMTViNTUxNjc2MTY2ODg0MTYxMzBhOWE3YzBkNzg5ZWUyZDcyNWUzNgCgrKc=: 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwOGUzZGI4Yzc3OTAwOTI3ZDcwYWJjMTViNTUxNjc2MTY2ODg0MTYxMzBhOWE3YzBkNzg5ZWUyZDcyNWUzNgCgrKc=: 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.431 nvme0n1 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.431 12:41:53 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.431 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.689 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.689 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:27.689 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.689 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:17:27.689 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.689 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:27.689 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:27.689 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:27.689 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQ2MWM1MjIwMDNkZjBkZjRiYTdhNzk1MTE0N2I0ODGMXil2: 00:17:27.689 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzg5MGQ5ZmIzOTZlNGI5MTFkZGE2M2E2MDY3NThjYzEyZDdmNjhmMTRhOGZiOGZlNGJmYzA4MDA2MzhmYjQxYtRhO90=: 00:17:27.689 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:27.689 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:27.689 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQ2MWM1MjIwMDNkZjBkZjRiYTdhNzk1MTE0N2I0ODGMXil2: 00:17:27.689 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzg5MGQ5ZmIzOTZlNGI5MTFkZGE2M2E2MDY3NThjYzEyZDdmNjhmMTRhOGZiOGZlNGJmYzA4MDA2MzhmYjQxYtRhO90=: ]] 00:17:27.689 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzg5MGQ5ZmIzOTZlNGI5MTFkZGE2M2E2MDY3NThjYzEyZDdmNjhmMTRhOGZiOGZlNGJmYzA4MDA2MzhmYjQxYtRhO90=: 00:17:27.689 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:17:27.689 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.689 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:27.689 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:27.689 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:27.689 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.689 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:27.689 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.689 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.689 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.689 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.689 12:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:27.689 12:41:53 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:17:27.689 12:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:27.689 12:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.689 12:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.689 12:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:27.689 12:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.689 12:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:27.689 12:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:27.689 12:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:27.689 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.689 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.689 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.689 nvme0n1 00:17:27.689 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.689 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.689 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.689 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.690 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.690 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.948 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.948 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:27.948 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.948 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.948 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.948 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:27.948 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:17:27.948 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:27.948 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:27.948 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:27.948 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:27.948 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmEwOTc2MmI3NjllYjZmNGIxMTA4N2M4ODUwZTBmMzc1ODdiMzU5YjY1YzcyZmFinfjXAQ==: 00:17:27.948 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: 00:17:27.948 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:27.948 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:27.948 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NmEwOTc2MmI3NjllYjZmNGIxMTA4N2M4ODUwZTBmMzc1ODdiMzU5YjY1YzcyZmFinfjXAQ==: 00:17:27.948 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: ]] 00:17:27.948 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: 00:17:27.948 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:17:27.948 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:27.948 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:27.948 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:27.948 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:27.948 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:27.948 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:27.948 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.948 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.948 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.948 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:27.948 12:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:27.948 12:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:27.948 12:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:27.948 12:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:27.948 12:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:27.948 12:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:27.948 12:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:27.948 12:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:27.948 12:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:27.948 12:41:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:27.948 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:27.948 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.948 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.948 nvme0n1 00:17:27.948 12:41:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.948 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:27.948 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.948 12:41:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:27.948 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.948 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.207 12:41:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.207 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.207 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.207 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.207 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.207 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.207 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:17:28.207 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.207 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:28.207 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:28.207 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:28.207 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGZmNWFhYzE1MzVkNzllNTI1NjA2MzkyNTVhMDUzMDAWFVfc: 00:17:28.207 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTk1NWViODA5NzIwMTM1MjUwOGQyYjM0MmQyNDY4NDkoyOqg: 00:17:28.207 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:28.207 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:28.207 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGZmNWFhYzE1MzVkNzllNTI1NjA2MzkyNTVhMDUzMDAWFVfc: 00:17:28.207 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTk1NWViODA5NzIwMTM1MjUwOGQyYjM0MmQyNDY4NDkoyOqg: ]] 00:17:28.207 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTk1NWViODA5NzIwMTM1MjUwOGQyYjM0MmQyNDY4NDkoyOqg: 00:17:28.207 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:17:28.207 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.207 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:28.207 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:28.207 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:28.207 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.207 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:28.207 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.207 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.207 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.207 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.207 12:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:28.207 12:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:28.207 12:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:28.207 12:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.207 12:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.207 12:41:54 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:28.207 12:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.207 12:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:28.207 12:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:28.207 12:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:28.207 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.207 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.207 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.207 nvme0n1 00:17:28.207 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWU5NjIxMDQ0ZTQ1ODA3NzEwMTNjNDI2NGFlMGEwZjliOGQ2MjhhZmQ5Y2UxZDY5kO91iA==: 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODU3MzBlYzUxNTkyMjZjNzM0M2M0MGVjNTM2ZDI1ZDkK2bAb: 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWU5NjIxMDQ0ZTQ1ODA3NzEwMTNjNDI2NGFlMGEwZjliOGQ2MjhhZmQ5Y2UxZDY5kO91iA==: 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODU3MzBlYzUxNTkyMjZjNzM0M2M0MGVjNTM2ZDI1ZDkK2bAb: ]] 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODU3MzBlYzUxNTkyMjZjNzM0M2M0MGVjNTM2ZDI1ZDkK2bAb: 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:17:28.466 12:41:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.466 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.723 nvme0n1 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwOGUzZGI4Yzc3OTAwOTI3ZDcwYWJjMTViNTUxNjc2MTY2ODg0MTYxMzBhOWE3YzBkNzg5ZWUyZDcyNWUzNgCgrKc=: 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwOGUzZGI4Yzc3OTAwOTI3ZDcwYWJjMTViNTUxNjc2MTY2ODg0MTYxMzBhOWE3YzBkNzg5ZWUyZDcyNWUzNgCgrKc=: 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:28.723 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.982 nvme0n1 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQ2MWM1MjIwMDNkZjBkZjRiYTdhNzk1MTE0N2I0ODGMXil2: 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzg5MGQ5ZmIzOTZlNGI5MTFkZGE2M2E2MDY3NThjYzEyZDdmNjhmMTRhOGZiOGZlNGJmYzA4MDA2MzhmYjQxYtRhO90=: 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQ2MWM1MjIwMDNkZjBkZjRiYTdhNzk1MTE0N2I0ODGMXil2: 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzg5MGQ5ZmIzOTZlNGI5MTFkZGE2M2E2MDY3NThjYzEyZDdmNjhmMTRhOGZiOGZlNGJmYzA4MDA2MzhmYjQxYtRhO90=: ]] 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzg5MGQ5ZmIzOTZlNGI5MTFkZGE2M2E2MDY3NThjYzEyZDdmNjhmMTRhOGZiOGZlNGJmYzA4MDA2MzhmYjQxYtRhO90=: 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.982 12:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.241 nvme0n1 00:17:29.241 12:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.241 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:29.241 12:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.241 12:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.241 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:29.241 12:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.499 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.499 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:29.499 12:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.499 12:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.499 12:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.499 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:29.499 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:17:29.499 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.499 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:29.499 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:29.499 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:29.499 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NmEwOTc2MmI3NjllYjZmNGIxMTA4N2M4ODUwZTBmMzc1ODdiMzU5YjY1YzcyZmFinfjXAQ==: 00:17:29.499 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: 00:17:29.499 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:29.499 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:29.499 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmEwOTc2MmI3NjllYjZmNGIxMTA4N2M4ODUwZTBmMzc1ODdiMzU5YjY1YzcyZmFinfjXAQ==: 00:17:29.499 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: ]] 00:17:29.499 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: 00:17:29.499 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:17:29.499 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:29.499 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:29.499 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:29.499 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:29.499 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:29.499 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:29.499 12:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.500 12:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.500 12:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.500 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:29.500 12:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:29.500 12:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:29.500 12:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:29.500 12:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:29.500 12:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:29.500 12:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:29.500 12:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:29.500 12:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:29.500 12:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:29.500 12:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:29.500 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.500 12:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.500 12:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.758 nvme0n1 00:17:29.758 12:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.758 12:41:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:29.758 12:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.758 12:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.758 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:29.758 12:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.758 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.758 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:29.758 12:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.759 12:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.759 12:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.759 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:29.759 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:17:29.759 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:29.759 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:29.759 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:29.759 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:29.759 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGZmNWFhYzE1MzVkNzllNTI1NjA2MzkyNTVhMDUzMDAWFVfc: 00:17:29.759 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTk1NWViODA5NzIwMTM1MjUwOGQyYjM0MmQyNDY4NDkoyOqg: 00:17:29.759 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:29.759 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:29.759 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGZmNWFhYzE1MzVkNzllNTI1NjA2MzkyNTVhMDUzMDAWFVfc: 00:17:29.759 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTk1NWViODA5NzIwMTM1MjUwOGQyYjM0MmQyNDY4NDkoyOqg: ]] 00:17:29.759 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTk1NWViODA5NzIwMTM1MjUwOGQyYjM0MmQyNDY4NDkoyOqg: 00:17:29.759 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:17:29.759 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:29.759 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:29.759 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:29.759 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:29.759 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:29.759 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:29.759 12:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.759 12:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.759 12:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.759 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:29.759 12:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:17:29.759 12:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:29.759 12:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:29.759 12:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:29.759 12:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:29.759 12:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:29.759 12:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:29.759 12:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:29.759 12:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:29.759 12:41:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:29.759 12:41:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:29.759 12:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.759 12:41:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.330 nvme0n1 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWU5NjIxMDQ0ZTQ1ODA3NzEwMTNjNDI2NGFlMGEwZjliOGQ2MjhhZmQ5Y2UxZDY5kO91iA==: 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODU3MzBlYzUxNTkyMjZjNzM0M2M0MGVjNTM2ZDI1ZDkK2bAb: 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OWU5NjIxMDQ0ZTQ1ODA3NzEwMTNjNDI2NGFlMGEwZjliOGQ2MjhhZmQ5Y2UxZDY5kO91iA==: 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODU3MzBlYzUxNTkyMjZjNzM0M2M0MGVjNTM2ZDI1ZDkK2bAb: ]] 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODU3MzBlYzUxNTkyMjZjNzM0M2M0MGVjNTM2ZDI1ZDkK2bAb: 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.330 12:41:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.595 nvme0n1 00:17:30.595 12:41:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.595 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:30.595 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:30.595 12:41:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.595 12:41:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.595 12:41:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.853 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:17:30.853 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:30.853 12:41:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.853 12:41:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.853 12:41:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.853 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:30.853 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:17:30.853 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:30.853 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:30.853 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:30.853 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:30.853 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwOGUzZGI4Yzc3OTAwOTI3ZDcwYWJjMTViNTUxNjc2MTY2ODg0MTYxMzBhOWE3YzBkNzg5ZWUyZDcyNWUzNgCgrKc=: 00:17:30.853 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:30.853 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:30.853 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:30.853 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwOGUzZGI4Yzc3OTAwOTI3ZDcwYWJjMTViNTUxNjc2MTY2ODg0MTYxMzBhOWE3YzBkNzg5ZWUyZDcyNWUzNgCgrKc=: 00:17:30.853 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:30.853 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:17:30.853 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:30.853 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:30.853 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:30.853 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:30.853 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:30.853 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:30.853 12:41:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.853 12:41:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.853 12:41:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.853 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:30.853 12:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:30.853 12:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:30.853 12:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:30.853 12:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:30.853 12:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:30.853 12:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:30.853 12:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:30.853 12:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
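The entries in this stretch of the log all follow one pattern: host/auth.sh@101-104 loop over every DH group and key index, provision the DHHC-1 key (and the controller key, where one is defined) on the target through nvmet_auth_set_key, then drive a full connect/verify/disconnect cycle through connect_authenticate. A minimal sketch of that driver loop, reconstructed from the xtrace markers above; the array names and the fixed sha384 digest are taken from the trace, everything else is an assumption about how the script is organized, not the literal host/auth.sh source:

    # Reconstructed driver loop (sketch, not the literal script).
    # In this part of the run the digest is fixed at sha384; dhgroups and keys
    # are the arrays whose iteration is visible at host/auth.sh@101-102.
    for dhgroup in "${dhgroups[@]}"; do      # ffdhe3072, ffdhe4096, ffdhe6144, ffdhe8192, ...
        for keyid in "${!keys[@]}"; do       # key IDs 0..4 in this log
            # Push the DHHC-1 key (and optional controller key) to the target side.
            nvmet_auth_set_key sha384 "$dhgroup" "$keyid"
            # Configure the SPDK host, attach, verify, and detach (see the next note).
            connect_authenticate sha384 "$dhgroup" "$keyid"
        done
    done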
00:17:30.853 12:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:30.853 12:41:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:30.853 12:41:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:30.853 12:41:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.853 12:41:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.112 nvme0n1 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQ2MWM1MjIwMDNkZjBkZjRiYTdhNzk1MTE0N2I0ODGMXil2: 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzg5MGQ5ZmIzOTZlNGI5MTFkZGE2M2E2MDY3NThjYzEyZDdmNjhmMTRhOGZiOGZlNGJmYzA4MDA2MzhmYjQxYtRhO90=: 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQ2MWM1MjIwMDNkZjBkZjRiYTdhNzk1MTE0N2I0ODGMXil2: 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzg5MGQ5ZmIzOTZlNGI5MTFkZGE2M2E2MDY3NThjYzEyZDdmNjhmMTRhOGZiOGZlNGJmYzA4MDA2MzhmYjQxYtRhO90=: ]] 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzg5MGQ5ZmIzOTZlNGI5MTFkZGE2M2E2MDY3NThjYzEyZDdmNjhmMTRhOGZiOGZlNGJmYzA4MDA2MzhmYjQxYtRhO90=: 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
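Each connect_authenticate pass then issues the same four RPCs, all visible verbatim in the trace: bdev_nvme_set_options pins the host to the digest and DH group under test, bdev_nvme_attach_controller connects to 10.0.0.1:4420 with --dhchap-key keyN (adding --dhchap-ctrlr-key ckeyN only when a controller key is defined), bdev_nvme_get_controllers piped through jq confirms that nvme0 appeared, and bdev_nvme_detach_controller tears it down before the next combination. A condensed sketch follows, assuming rpc_cmd is the usual wrapper around scripts/rpc.py; the verification and detach steps (host/auth.sh@64-65) actually run outside the function and are folded in here only for readability:

    # Condensed view of one authenticated connect cycle (sketch, not the
    # literal function body).
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Restrict the initiator to the digest/dhgroup combination under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # Attach over TCP with the matching key; the controller key is optional.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
        # Authentication succeeded only if the controller is now visible...
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        # ...then disconnect so the next digest/dhgroup/keyid pass starts clean.
        rpc_cmd bdev_nvme_detach_controller nvme0
    }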
00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.112 12:41:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.048 nvme0n1 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmEwOTc2MmI3NjllYjZmNGIxMTA4N2M4ODUwZTBmMzc1ODdiMzU5YjY1YzcyZmFinfjXAQ==: 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmEwOTc2MmI3NjllYjZmNGIxMTA4N2M4ODUwZTBmMzc1ODdiMzU5YjY1YzcyZmFinfjXAQ==: 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: ]] 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.048 12:41:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.613 nvme0n1 00:17:32.613 12:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.613 12:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:32.613 12:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.613 12:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:32.613 12:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.613 12:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.613 12:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.613 12:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:32.613 12:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.613 12:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.613 12:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.613 12:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:32.613 12:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:17:32.613 12:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:32.613 12:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:32.613 12:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:32.613 12:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:32.613 12:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGZmNWFhYzE1MzVkNzllNTI1NjA2MzkyNTVhMDUzMDAWFVfc: 00:17:32.613 12:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTk1NWViODA5NzIwMTM1MjUwOGQyYjM0MmQyNDY4NDkoyOqg: 00:17:32.613 12:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:32.613 12:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:32.613 12:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGZmNWFhYzE1MzVkNzllNTI1NjA2MzkyNTVhMDUzMDAWFVfc: 00:17:32.613 12:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTk1NWViODA5NzIwMTM1MjUwOGQyYjM0MmQyNDY4NDkoyOqg: ]] 00:17:32.613 12:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTk1NWViODA5NzIwMTM1MjUwOGQyYjM0MmQyNDY4NDkoyOqg: 00:17:32.613 12:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:17:32.613 12:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:32.613 12:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:32.613 12:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:32.613 12:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:32.613 12:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:32.613 12:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:17:32.613 12:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.613 12:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.613 12:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.613 12:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:32.613 12:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:32.613 12:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:32.613 12:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:32.613 12:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:32.613 12:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:32.613 12:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:32.614 12:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:32.614 12:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:32.614 12:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:32.614 12:41:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:32.614 12:41:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.614 12:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.614 12:41:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.180 nvme0n1 00:17:33.180 12:41:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.180 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:33.180 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:33.180 12:41:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.180 12:41:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.180 12:41:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.180 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.180 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:33.180 12:41:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.180 12:41:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.180 12:41:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.180 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:33.180 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:17:33.180 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:33.180 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:33.180 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:33.180 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:33.180 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OWU5NjIxMDQ0ZTQ1ODA3NzEwMTNjNDI2NGFlMGEwZjliOGQ2MjhhZmQ5Y2UxZDY5kO91iA==: 00:17:33.180 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODU3MzBlYzUxNTkyMjZjNzM0M2M0MGVjNTM2ZDI1ZDkK2bAb: 00:17:33.180 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:33.180 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:33.180 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWU5NjIxMDQ0ZTQ1ODA3NzEwMTNjNDI2NGFlMGEwZjliOGQ2MjhhZmQ5Y2UxZDY5kO91iA==: 00:17:33.180 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODU3MzBlYzUxNTkyMjZjNzM0M2M0MGVjNTM2ZDI1ZDkK2bAb: ]] 00:17:33.180 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODU3MzBlYzUxNTkyMjZjNzM0M2M0MGVjNTM2ZDI1ZDkK2bAb: 00:17:33.180 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:17:33.180 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:33.180 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:33.180 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:33.180 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:33.180 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:33.180 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:33.180 12:41:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.180 12:41:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.180 12:41:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.180 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:33.180 12:41:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:33.180 12:41:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:33.180 12:41:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:33.180 12:41:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:33.180 12:41:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:33.180 12:41:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:33.180 12:41:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:33.180 12:41:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:33.181 12:41:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:33.181 12:41:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:33.181 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:33.181 12:41:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.181 12:41:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.114 nvme0n1 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwOGUzZGI4Yzc3OTAwOTI3ZDcwYWJjMTViNTUxNjc2MTY2ODg0MTYxMzBhOWE3YzBkNzg5ZWUyZDcyNWUzNgCgrKc=: 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwOGUzZGI4Yzc3OTAwOTI3ZDcwYWJjMTViNTUxNjc2MTY2ODg0MTYxMzBhOWE3YzBkNzg5ZWUyZDcyNWUzNgCgrKc=: 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:34.114 12:41:59 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.114 12:41:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.681 nvme0n1 00:17:34.681 12:42:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.681 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:34.681 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:34.681 12:42:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.681 12:42:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.681 12:42:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.681 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQ2MWM1MjIwMDNkZjBkZjRiYTdhNzk1MTE0N2I0ODGMXil2: 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzg5MGQ5ZmIzOTZlNGI5MTFkZGE2M2E2MDY3NThjYzEyZDdmNjhmMTRhOGZiOGZlNGJmYzA4MDA2MzhmYjQxYtRhO90=: 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MmQ2MWM1MjIwMDNkZjBkZjRiYTdhNzk1MTE0N2I0ODGMXil2: 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzg5MGQ5ZmIzOTZlNGI5MTFkZGE2M2E2MDY3NThjYzEyZDdmNjhmMTRhOGZiOGZlNGJmYzA4MDA2MzhmYjQxYtRhO90=: ]] 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzg5MGQ5ZmIzOTZlNGI5MTFkZGE2M2E2MDY3NThjYzEyZDdmNjhmMTRhOGZiOGZlNGJmYzA4MDA2MzhmYjQxYtRhO90=: 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.682 nvme0n1 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:34.682 12:42:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.940 12:42:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.940 12:42:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.940 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:34.940 12:42:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.940 12:42:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.940 12:42:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.940 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:34.940 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:17:34.940 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:34.940 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:34.940 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:34.940 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:34.940 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmEwOTc2MmI3NjllYjZmNGIxMTA4N2M4ODUwZTBmMzc1ODdiMzU5YjY1YzcyZmFinfjXAQ==: 00:17:34.940 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: 00:17:34.940 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:34.940 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:34.940 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmEwOTc2MmI3NjllYjZmNGIxMTA4N2M4ODUwZTBmMzc1ODdiMzU5YjY1YzcyZmFinfjXAQ==: 00:17:34.940 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: ]] 00:17:34.940 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: 00:17:34.940 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:17:34.940 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:34.940 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:34.940 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:34.940 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:34.940 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:34.940 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:34.940 12:42:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.940 12:42:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.940 12:42:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.941 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:34.941 12:42:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:34.941 12:42:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:34.941 12:42:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:34.941 12:42:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:34.941 12:42:00 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:34.941 12:42:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:34.941 12:42:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:34.941 12:42:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:34.941 12:42:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:34.941 12:42:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:34.941 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.941 12:42:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.941 12:42:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.941 nvme0n1 00:17:34.941 12:42:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.941 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:34.941 12:42:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.941 12:42:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.941 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:34.941 12:42:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.941 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.941 12:42:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:34.941 12:42:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.941 12:42:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:34.941 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.941 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:34.941 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:17:34.941 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:34.941 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:34.941 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:34.941 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:34.941 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGZmNWFhYzE1MzVkNzllNTI1NjA2MzkyNTVhMDUzMDAWFVfc: 00:17:34.941 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTk1NWViODA5NzIwMTM1MjUwOGQyYjM0MmQyNDY4NDkoyOqg: 00:17:34.941 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:34.941 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:34.941 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGZmNWFhYzE1MzVkNzllNTI1NjA2MzkyNTVhMDUzMDAWFVfc: 00:17:34.941 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTk1NWViODA5NzIwMTM1MjUwOGQyYjM0MmQyNDY4NDkoyOqg: ]] 00:17:34.941 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTk1NWViODA5NzIwMTM1MjUwOGQyYjM0MmQyNDY4NDkoyOqg: 00:17:34.941 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:17:34.941 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:34.941 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:34.941 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:34.941 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:34.941 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:34.941 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:34.941 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.941 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.198 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.198 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:35.198 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:35.198 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:35.198 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:35.198 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.198 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.198 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:35.198 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.198 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:35.198 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:35.198 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:35.198 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.198 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.198 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.198 nvme0n1 00:17:35.198 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.198 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:35.198 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.198 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.199 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.199 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.199 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.199 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:35.199 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.199 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.199 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.199 12:42:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:35.199 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:17:35.199 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.199 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:35.199 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:35.199 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:35.199 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWU5NjIxMDQ0ZTQ1ODA3NzEwMTNjNDI2NGFlMGEwZjliOGQ2MjhhZmQ5Y2UxZDY5kO91iA==: 00:17:35.199 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODU3MzBlYzUxNTkyMjZjNzM0M2M0MGVjNTM2ZDI1ZDkK2bAb: 00:17:35.199 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:35.199 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:35.199 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWU5NjIxMDQ0ZTQ1ODA3NzEwMTNjNDI2NGFlMGEwZjliOGQ2MjhhZmQ5Y2UxZDY5kO91iA==: 00:17:35.199 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODU3MzBlYzUxNTkyMjZjNzM0M2M0MGVjNTM2ZDI1ZDkK2bAb: ]] 00:17:35.199 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODU3MzBlYzUxNTkyMjZjNzM0M2M0MGVjNTM2ZDI1ZDkK2bAb: 00:17:35.199 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:17:35.199 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:35.199 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:35.199 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:35.199 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:35.199 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:35.199 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:35.199 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.199 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.199 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.199 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:35.199 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:35.199 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:35.199 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:35.199 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.199 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.199 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:35.199 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.199 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:35.199 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:35.199 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:35.199 12:42:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:35.199 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.199 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.457 nvme0n1 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwOGUzZGI4Yzc3OTAwOTI3ZDcwYWJjMTViNTUxNjc2MTY2ODg0MTYxMzBhOWE3YzBkNzg5ZWUyZDcyNWUzNgCgrKc=: 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwOGUzZGI4Yzc3OTAwOTI3ZDcwYWJjMTViNTUxNjc2MTY2ODg0MTYxMzBhOWE3YzBkNzg5ZWUyZDcyNWUzNgCgrKc=: 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.457 nvme0n1 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.457 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.716 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.716 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:35.716 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.716 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.716 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.716 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:35.716 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:35.716 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:17:35.716 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.716 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:35.716 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:35.716 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:35.716 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MmQ2MWM1MjIwMDNkZjBkZjRiYTdhNzk1MTE0N2I0ODGMXil2: 00:17:35.716 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzg5MGQ5ZmIzOTZlNGI5MTFkZGE2M2E2MDY3NThjYzEyZDdmNjhmMTRhOGZiOGZlNGJmYzA4MDA2MzhmYjQxYtRhO90=: 00:17:35.716 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:35.716 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:35.716 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQ2MWM1MjIwMDNkZjBkZjRiYTdhNzk1MTE0N2I0ODGMXil2: 00:17:35.716 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzg5MGQ5ZmIzOTZlNGI5MTFkZGE2M2E2MDY3NThjYzEyZDdmNjhmMTRhOGZiOGZlNGJmYzA4MDA2MzhmYjQxYtRhO90=: ]] 00:17:35.716 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzg5MGQ5ZmIzOTZlNGI5MTFkZGE2M2E2MDY3NThjYzEyZDdmNjhmMTRhOGZiOGZlNGJmYzA4MDA2MzhmYjQxYtRhO90=: 00:17:35.716 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:17:35.716 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:35.716 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:35.716 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:35.716 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:35.716 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:35.716 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:35.716 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.716 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.716 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.716 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:35.716 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:35.716 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:35.716 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:35.716 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.716 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.716 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:35.716 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.717 nvme0n1 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.717 
12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmEwOTc2MmI3NjllYjZmNGIxMTA4N2M4ODUwZTBmMzc1ODdiMzU5YjY1YzcyZmFinfjXAQ==: 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmEwOTc2MmI3NjllYjZmNGIxMTA4N2M4ODUwZTBmMzc1ODdiMzU5YjY1YzcyZmFinfjXAQ==: 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: ]] 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.717 12:42:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.717 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.975 nvme0n1 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGZmNWFhYzE1MzVkNzllNTI1NjA2MzkyNTVhMDUzMDAWFVfc: 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTk1NWViODA5NzIwMTM1MjUwOGQyYjM0MmQyNDY4NDkoyOqg: 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGZmNWFhYzE1MzVkNzllNTI1NjA2MzkyNTVhMDUzMDAWFVfc: 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTk1NWViODA5NzIwMTM1MjUwOGQyYjM0MmQyNDY4NDkoyOqg: ]] 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTk1NWViODA5NzIwMTM1MjUwOGQyYjM0MmQyNDY4NDkoyOqg: 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.975 12:42:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.233 nvme0n1 00:17:36.233 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.233 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:36.233 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:36.233 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.233 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.233 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.233 12:42:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.233 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:36.233 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.233 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.233 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.233 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:36.233 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:17:36.233 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:36.233 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:36.233 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:36.233 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:36.233 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWU5NjIxMDQ0ZTQ1ODA3NzEwMTNjNDI2NGFlMGEwZjliOGQ2MjhhZmQ5Y2UxZDY5kO91iA==: 00:17:36.233 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODU3MzBlYzUxNTkyMjZjNzM0M2M0MGVjNTM2ZDI1ZDkK2bAb: 00:17:36.233 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:36.233 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:36.233 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWU5NjIxMDQ0ZTQ1ODA3NzEwMTNjNDI2NGFlMGEwZjliOGQ2MjhhZmQ5Y2UxZDY5kO91iA==: 00:17:36.233 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODU3MzBlYzUxNTkyMjZjNzM0M2M0MGVjNTM2ZDI1ZDkK2bAb: ]] 00:17:36.233 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODU3MzBlYzUxNTkyMjZjNzM0M2M0MGVjNTM2ZDI1ZDkK2bAb: 00:17:36.233 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:17:36.233 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:36.233 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:36.233 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:36.234 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:36.234 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:36.234 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:36.234 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.234 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.234 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.234 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:36.234 12:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:36.234 12:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:36.234 12:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:36.234 12:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:36.234 12:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
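(Editorial sketch — the get_main_ns_ip helper traced around nvmf/common.sh@741-755 only resolves which address the host side should dial. A minimal reconstruction of that logic from the expanded trace follows; the transport variable name TEST_TRANSPORT and the use of indirect expansion are assumptions, since the trace only shows already-expanded values such as "tcp" and "10.0.0.1":

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
      # pick the env-var *name* for the active transport, then dereference it
      [[ -z $TEST_TRANSPORT ]] && return 1      # assumed variable; the trace shows "tcp"
      ip=${ip_candidates[$TEST_TRANSPORT]}      # tcp -> NVMF_INITIATOR_IP
      [[ -z ${!ip} ]] && return 1               # the trace shows the value 10.0.0.1
      echo "${!ip}"
  }

In this run the result is always 10.0.0.1, the initiator-side address used by every bdev_nvme_attach_controller call below.)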
00:17:36.234 12:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:36.234 12:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:36.234 12:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:36.234 12:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:36.234 12:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:36.234 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:36.234 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.234 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.491 nvme0n1 00:17:36.491 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.491 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:36.491 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:36.491 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.491 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.491 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.491 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.491 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:36.491 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.491 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.491 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.491 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:36.491 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:17:36.491 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:36.491 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:36.491 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:36.491 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:36.491 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwOGUzZGI4Yzc3OTAwOTI3ZDcwYWJjMTViNTUxNjc2MTY2ODg0MTYxMzBhOWE3YzBkNzg5ZWUyZDcyNWUzNgCgrKc=: 00:17:36.491 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:36.492 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:36.492 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:36.492 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwOGUzZGI4Yzc3OTAwOTI3ZDcwYWJjMTViNTUxNjc2MTY2ODg0MTYxMzBhOWE3YzBkNzg5ZWUyZDcyNWUzNgCgrKc=: 00:17:36.492 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:36.492 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:17:36.492 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:36.492 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:36.492 
12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:36.492 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:36.492 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:36.492 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:36.492 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.492 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.492 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.492 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:36.492 12:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:36.492 12:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:36.492 12:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:36.492 12:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:36.492 12:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:36.492 12:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:36.492 12:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:36.492 12:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:36.492 12:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:36.492 12:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:36.492 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:36.492 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.492 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.492 nvme0n1 00:17:36.492 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.492 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:36.492 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.492 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:36.492 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.492 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQ2MWM1MjIwMDNkZjBkZjRiYTdhNzk1MTE0N2I0ODGMXil2: 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzg5MGQ5ZmIzOTZlNGI5MTFkZGE2M2E2MDY3NThjYzEyZDdmNjhmMTRhOGZiOGZlNGJmYzA4MDA2MzhmYjQxYtRhO90=: 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQ2MWM1MjIwMDNkZjBkZjRiYTdhNzk1MTE0N2I0ODGMXil2: 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzg5MGQ5ZmIzOTZlNGI5MTFkZGE2M2E2MDY3NThjYzEyZDdmNjhmMTRhOGZiOGZlNGJmYzA4MDA2MzhmYjQxYtRhO90=: ]] 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzg5MGQ5ZmIzOTZlNGI5MTFkZGE2M2E2MDY3NThjYzEyZDdmNjhmMTRhOGZiOGZlNGJmYzA4MDA2MzhmYjQxYtRhO90=: 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.750 nvme0n1 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:36.750 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.007 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.007 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:37.007 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.007 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.007 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.007 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:37.007 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:17:37.007 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:37.007 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:37.008 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:37.008 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:37.008 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmEwOTc2MmI3NjllYjZmNGIxMTA4N2M4ODUwZTBmMzc1ODdiMzU5YjY1YzcyZmFinfjXAQ==: 00:17:37.008 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: 00:17:37.008 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:37.008 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:37.008 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmEwOTc2MmI3NjllYjZmNGIxMTA4N2M4ODUwZTBmMzc1ODdiMzU5YjY1YzcyZmFinfjXAQ==: 00:17:37.008 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: ]] 00:17:37.008 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: 00:17:37.008 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:17:37.008 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:37.008 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:37.008 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:37.008 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:37.008 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:37.008 12:42:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:37.008 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.008 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.008 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.008 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:37.008 12:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:37.008 12:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:37.008 12:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:37.008 12:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:37.008 12:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:37.008 12:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:37.008 12:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:37.008 12:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:37.008 12:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:37.008 12:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:37.008 12:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.008 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.008 12:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.008 nvme0n1 00:17:37.008 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.008 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:37.008 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.008 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:37.008 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.008 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGZmNWFhYzE1MzVkNzllNTI1NjA2MzkyNTVhMDUzMDAWFVfc: 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTk1NWViODA5NzIwMTM1MjUwOGQyYjM0MmQyNDY4NDkoyOqg: 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGZmNWFhYzE1MzVkNzllNTI1NjA2MzkyNTVhMDUzMDAWFVfc: 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTk1NWViODA5NzIwMTM1MjUwOGQyYjM0MmQyNDY4NDkoyOqg: ]] 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTk1NWViODA5NzIwMTM1MjUwOGQyYjM0MmQyNDY4NDkoyOqg: 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.265 nvme0n1 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq 
-r '.[].name' 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.265 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWU5NjIxMDQ0ZTQ1ODA3NzEwMTNjNDI2NGFlMGEwZjliOGQ2MjhhZmQ5Y2UxZDY5kO91iA==: 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODU3MzBlYzUxNTkyMjZjNzM0M2M0MGVjNTM2ZDI1ZDkK2bAb: 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWU5NjIxMDQ0ZTQ1ODA3NzEwMTNjNDI2NGFlMGEwZjliOGQ2MjhhZmQ5Y2UxZDY5kO91iA==: 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODU3MzBlYzUxNTkyMjZjNzM0M2M0MGVjNTM2ZDI1ZDkK2bAb: ]] 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODU3MzBlYzUxNTkyMjZjNzM0M2M0MGVjNTM2ZDI1ZDkK2bAb: 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local 
ip 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.523 nvme0n1 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.523 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwOGUzZGI4Yzc3OTAwOTI3ZDcwYWJjMTViNTUxNjc2MTY2ODg0MTYxMzBhOWE3YzBkNzg5ZWUyZDcyNWUzNgCgrKc=: 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NzgwOGUzZGI4Yzc3OTAwOTI3ZDcwYWJjMTViNTUxNjc2MTY2ODg0MTYxMzBhOWE3YzBkNzg5ZWUyZDcyNWUzNgCgrKc=: 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.781 nvme0n1 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.781 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:38.039 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.039 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.039 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:38.039 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:17:38.039 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.039 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.039 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:38.039 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:38.039 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:17:38.039 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:38.039 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:38.039 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:38.039 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:38.039 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQ2MWM1MjIwMDNkZjBkZjRiYTdhNzk1MTE0N2I0ODGMXil2: 00:17:38.039 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzg5MGQ5ZmIzOTZlNGI5MTFkZGE2M2E2MDY3NThjYzEyZDdmNjhmMTRhOGZiOGZlNGJmYzA4MDA2MzhmYjQxYtRhO90=: 00:17:38.039 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:38.039 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:38.039 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQ2MWM1MjIwMDNkZjBkZjRiYTdhNzk1MTE0N2I0ODGMXil2: 00:17:38.039 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzg5MGQ5ZmIzOTZlNGI5MTFkZGE2M2E2MDY3NThjYzEyZDdmNjhmMTRhOGZiOGZlNGJmYzA4MDA2MzhmYjQxYtRhO90=: ]] 00:17:38.039 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzg5MGQ5ZmIzOTZlNGI5MTFkZGE2M2E2MDY3NThjYzEyZDdmNjhmMTRhOGZiOGZlNGJmYzA4MDA2MzhmYjQxYtRhO90=: 00:17:38.039 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:17:38.039 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:38.039 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:38.039 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:38.039 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:38.039 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:38.039 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:38.039 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.039 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.039 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.039 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:38.039 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:38.039 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:38.039 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:38.039 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:38.039 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:38.039 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:17:38.039 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:38.039 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:38.039 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:38.039 12:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:38.039 12:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.039 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.039 12:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.364 nvme0n1 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmEwOTc2MmI3NjllYjZmNGIxMTA4N2M4ODUwZTBmMzc1ODdiMzU5YjY1YzcyZmFinfjXAQ==: 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmEwOTc2MmI3NjllYjZmNGIxMTA4N2M4ODUwZTBmMzc1ODdiMzU5YjY1YzcyZmFinfjXAQ==: 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: ]] 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
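(Editorial sketch — every connect_authenticate call traced here at host/auth.sh@55-65 follows the same pattern: point the SPDK bdev/nvme layer at one digest/dhgroup pair, attach with the keyid under test, confirm the controller came up, then tear it down. A condensed reconstruction from the trace, not the verbatim host/auth.sh source; the NQNs, address, port and transport are taken literally from the log:

  connect_authenticate() {
      local digest=$1 dhgroup=$2 keyid=$3
      # controller key is optional: only passed when a ckey exists for this keyid
      local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a "$(get_main_ns_ip)" -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" "${ckey[@]}"
      # the attach only succeeds if DH-HMAC-CHAP completed, so nvme0 appearing is the pass condition
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
  }

It is driven by the nested loops visible at host/auth.sh@101-103, which walk every dhgroup and every keyid after first programming the matching expected key on the target side via nvmet_auth_set_key.)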
00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.364 12:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.639 nvme0n1 00:17:38.639 12:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.639 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:38.639 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:38.639 12:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.639 12:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.897 12:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.897 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.897 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:38.897 12:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.897 12:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.897 12:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.897 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:17:38.897 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:17:38.897 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:38.897 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:38.897 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:38.897 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:38.897 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGZmNWFhYzE1MzVkNzllNTI1NjA2MzkyNTVhMDUzMDAWFVfc: 00:17:38.897 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTk1NWViODA5NzIwMTM1MjUwOGQyYjM0MmQyNDY4NDkoyOqg: 00:17:38.897 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:38.897 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:38.897 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGZmNWFhYzE1MzVkNzllNTI1NjA2MzkyNTVhMDUzMDAWFVfc: 00:17:38.897 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTk1NWViODA5NzIwMTM1MjUwOGQyYjM0MmQyNDY4NDkoyOqg: ]] 00:17:38.897 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTk1NWViODA5NzIwMTM1MjUwOGQyYjM0MmQyNDY4NDkoyOqg: 00:17:38.897 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:17:38.897 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:38.897 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:38.897 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:38.897 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:38.897 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:38.897 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:38.897 12:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.897 12:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:38.897 12:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.897 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:38.897 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:38.897 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:38.897 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:38.897 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:38.897 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:38.897 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:38.897 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:38.897 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:38.897 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:38.897 12:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:38.897 12:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.897 12:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.897 12:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.155 nvme0n1 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWU5NjIxMDQ0ZTQ1ODA3NzEwMTNjNDI2NGFlMGEwZjliOGQ2MjhhZmQ5Y2UxZDY5kO91iA==: 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODU3MzBlYzUxNTkyMjZjNzM0M2M0MGVjNTM2ZDI1ZDkK2bAb: 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWU5NjIxMDQ0ZTQ1ODA3NzEwMTNjNDI2NGFlMGEwZjliOGQ2MjhhZmQ5Y2UxZDY5kO91iA==: 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODU3MzBlYzUxNTkyMjZjNzM0M2M0MGVjNTM2ZDI1ZDkK2bAb: ]] 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODU3MzBlYzUxNTkyMjZjNzM0M2M0MGVjNTM2ZDI1ZDkK2bAb: 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.155 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.720 nvme0n1 00:17:39.720 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.720 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.720 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:39.720 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.720 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.720 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.720 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.720 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.720 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.720 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.720 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.720 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:39.720 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:17:39.720 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:39.720 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:39.720 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:39.720 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:39.720 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NzgwOGUzZGI4Yzc3OTAwOTI3ZDcwYWJjMTViNTUxNjc2MTY2ODg0MTYxMzBhOWE3YzBkNzg5ZWUyZDcyNWUzNgCgrKc=: 00:17:39.720 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:39.720 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:39.720 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:39.720 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwOGUzZGI4Yzc3OTAwOTI3ZDcwYWJjMTViNTUxNjc2MTY2ODg0MTYxMzBhOWE3YzBkNzg5ZWUyZDcyNWUzNgCgrKc=: 00:17:39.720 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:39.720 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:17:39.720 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:39.720 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:39.720 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:39.720 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:39.720 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:39.720 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:39.720 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.721 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.721 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.721 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:39.721 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:39.721 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:39.721 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:39.721 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:39.721 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:39.721 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:39.721 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:39.721 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:39.721 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:39.721 12:42:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:39.721 12:42:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:39.721 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.721 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.979 nvme0n1 00:17:39.979 12:42:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.979 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:39.979 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:39.979 12:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.979 12:42:06 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:39.979 12:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.979 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.979 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:39.979 12:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.979 12:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.237 12:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.237 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:40.237 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:40.237 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:17:40.237 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:40.237 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:40.237 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:40.237 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:40.237 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQ2MWM1MjIwMDNkZjBkZjRiYTdhNzk1MTE0N2I0ODGMXil2: 00:17:40.237 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yzg5MGQ5ZmIzOTZlNGI5MTFkZGE2M2E2MDY3NThjYzEyZDdmNjhmMTRhOGZiOGZlNGJmYzA4MDA2MzhmYjQxYtRhO90=: 00:17:40.237 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:40.237 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:40.237 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQ2MWM1MjIwMDNkZjBkZjRiYTdhNzk1MTE0N2I0ODGMXil2: 00:17:40.237 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yzg5MGQ5ZmIzOTZlNGI5MTFkZGE2M2E2MDY3NThjYzEyZDdmNjhmMTRhOGZiOGZlNGJmYzA4MDA2MzhmYjQxYtRhO90=: ]] 00:17:40.237 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yzg5MGQ5ZmIzOTZlNGI5MTFkZGE2M2E2MDY3NThjYzEyZDdmNjhmMTRhOGZiOGZlNGJmYzA4MDA2MzhmYjQxYtRhO90=: 00:17:40.237 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:17:40.237 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:40.237 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:40.237 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:40.237 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:40.237 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:40.237 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:40.237 12:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.237 12:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.237 12:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.237 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:40.237 12:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:40.237 12:42:06 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:17:40.237 12:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:40.237 12:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.238 12:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.238 12:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:40.238 12:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.238 12:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:40.238 12:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:40.238 12:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:40.238 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.238 12:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.238 12:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.804 nvme0n1 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmEwOTc2MmI3NjllYjZmNGIxMTA4N2M4ODUwZTBmMzc1ODdiMzU5YjY1YzcyZmFinfjXAQ==: 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NmEwOTc2MmI3NjllYjZmNGIxMTA4N2M4ODUwZTBmMzc1ODdiMzU5YjY1YzcyZmFinfjXAQ==: 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: ]] 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.804 12:42:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.371 nvme0n1 00:17:41.371 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.371 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:41.371 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.371 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.371 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:41.630 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.631 12:42:07 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.631 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:41.631 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.631 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.631 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.631 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:41.631 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:17:41.631 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:41.631 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:41.631 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:41.631 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:41.631 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGZmNWFhYzE1MzVkNzllNTI1NjA2MzkyNTVhMDUzMDAWFVfc: 00:17:41.631 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTk1NWViODA5NzIwMTM1MjUwOGQyYjM0MmQyNDY4NDkoyOqg: 00:17:41.631 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:41.631 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:41.631 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGZmNWFhYzE1MzVkNzllNTI1NjA2MzkyNTVhMDUzMDAWFVfc: 00:17:41.631 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTk1NWViODA5NzIwMTM1MjUwOGQyYjM0MmQyNDY4NDkoyOqg: ]] 00:17:41.631 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTk1NWViODA5NzIwMTM1MjUwOGQyYjM0MmQyNDY4NDkoyOqg: 00:17:41.631 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:17:41.631 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:41.631 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:41.631 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:41.631 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:41.631 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:41.631 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:41.631 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.631 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:41.631 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.631 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:41.631 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:41.631 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:41.631 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:41.631 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:41.631 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:41.631 12:42:07 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:41.631 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:41.631 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:41.631 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:41.631 12:42:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:41.631 12:42:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.631 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.631 12:42:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.198 nvme0n1 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWU5NjIxMDQ0ZTQ1ODA3NzEwMTNjNDI2NGFlMGEwZjliOGQ2MjhhZmQ5Y2UxZDY5kO91iA==: 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODU3MzBlYzUxNTkyMjZjNzM0M2M0MGVjNTM2ZDI1ZDkK2bAb: 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWU5NjIxMDQ0ZTQ1ODA3NzEwMTNjNDI2NGFlMGEwZjliOGQ2MjhhZmQ5Y2UxZDY5kO91iA==: 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODU3MzBlYzUxNTkyMjZjNzM0M2M0MGVjNTM2ZDI1ZDkK2bAb: ]] 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODU3MzBlYzUxNTkyMjZjNzM0M2M0MGVjNTM2ZDI1ZDkK2bAb: 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:17:42.198 12:42:08 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.198 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.814 nvme0n1 00:17:42.814 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.071 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:43.072 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.072 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.072 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:43.072 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.072 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.072 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:43.072 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.072 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.072 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.072 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:17:43.072 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:17:43.072 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:43.072 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:43.072 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:43.072 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:43.072 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzgwOGUzZGI4Yzc3OTAwOTI3ZDcwYWJjMTViNTUxNjc2MTY2ODg0MTYxMzBhOWE3YzBkNzg5ZWUyZDcyNWUzNgCgrKc=: 00:17:43.072 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:43.072 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:43.072 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:43.072 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzgwOGUzZGI4Yzc3OTAwOTI3ZDcwYWJjMTViNTUxNjc2MTY2ODg0MTYxMzBhOWE3YzBkNzg5ZWUyZDcyNWUzNgCgrKc=: 00:17:43.072 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:43.072 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:17:43.072 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:43.072 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:43.072 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:43.072 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:43.072 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:43.072 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:43.072 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.072 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.072 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.072 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:43.072 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:43.072 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:43.072 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:43.072 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:43.072 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:43.072 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:43.072 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:43.072 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:43.072 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:43.072 12:42:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:43.072 12:42:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:43.072 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:43.072 12:42:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.636 nvme0n1 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmEwOTc2MmI3NjllYjZmNGIxMTA4N2M4ODUwZTBmMzc1ODdiMzU5YjY1YzcyZmFinfjXAQ==: 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmEwOTc2MmI3NjllYjZmNGIxMTA4N2M4ODUwZTBmMzc1ODdiMzU5YjY1YzcyZmFinfjXAQ==: 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: ]] 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWJjZWY5YWU0NTgxYzEwMTJkNzRjM2UxYjQ1MTM2ZjU4MDQwMzI5ZWJmN2I3NTRmXDGGbg==: 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:43.636 
12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.636 request: 00:17:43.636 { 00:17:43.636 "name": "nvme0", 00:17:43.636 "trtype": "tcp", 00:17:43.636 "traddr": "10.0.0.1", 00:17:43.636 "adrfam": "ipv4", 00:17:43.636 "trsvcid": "4420", 00:17:43.636 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:43.636 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:43.636 "prchk_reftag": false, 00:17:43.636 "prchk_guard": false, 00:17:43.636 "hdgst": false, 00:17:43.636 "ddgst": false, 00:17:43.636 "method": "bdev_nvme_attach_controller", 00:17:43.636 "req_id": 1 00:17:43.636 } 00:17:43.636 Got JSON-RPC error response 00:17:43.636 response: 00:17:43.636 { 00:17:43.636 "code": -5, 00:17:43.636 "message": "Input/output error" 00:17:43.636 } 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:43.636 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:17:43.894 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.894 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:17:43.894 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.894 12:42:09 nvmf_tcp.nvmf_auth_host -- 
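The request/response pair above is an expected failure, not a test error: host/auth.sh now tries to attach without supplying any DH-CHAP key (and, further down, with only key2 and with the mismatched key1/ckey2 pair) against a subsystem that requires authentication, so the connect is rejected and bdev_nvme_attach_controller returns JSON-RPC error -5 ("Input/output error"). The NOT wrapper simply asserts that the wrapped rpc_cmd exits non-zero. Outside the framework the same check could look like this sketch (paths and NQNs as above; illustrative only):

    # This attach is expected to fail: the target demands DH-HMAC-CHAP and no key is offered.
    if scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
           -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
        echo "ERROR: unauthenticated attach unexpectedly succeeded" >&2
        exit 1
    fi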
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.894 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:17:43.894 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:17:43.894 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:43.894 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:43.894 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:43.894 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:43.894 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:43.894 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:43.894 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.895 request: 00:17:43.895 { 00:17:43.895 "name": "nvme0", 00:17:43.895 "trtype": "tcp", 00:17:43.895 "traddr": "10.0.0.1", 00:17:43.895 "adrfam": "ipv4", 00:17:43.895 "trsvcid": "4420", 00:17:43.895 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:43.895 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:43.895 "prchk_reftag": false, 00:17:43.895 "prchk_guard": false, 00:17:43.895 "hdgst": false, 00:17:43.895 "ddgst": false, 00:17:43.895 "dhchap_key": "key2", 00:17:43.895 "method": "bdev_nvme_attach_controller", 00:17:43.895 "req_id": 1 00:17:43.895 } 00:17:43.895 Got JSON-RPC error response 00:17:43.895 response: 00:17:43.895 { 00:17:43.895 "code": -5, 00:17:43.895 "message": "Input/output error" 00:17:43.895 } 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:17:43.895 12:42:09 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.895 request: 00:17:43.895 { 00:17:43.895 "name": "nvme0", 00:17:43.895 "trtype": "tcp", 00:17:43.895 "traddr": "10.0.0.1", 00:17:43.895 "adrfam": "ipv4", 
00:17:43.895 "trsvcid": "4420", 00:17:43.895 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:43.895 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:43.895 "prchk_reftag": false, 00:17:43.895 "prchk_guard": false, 00:17:43.895 "hdgst": false, 00:17:43.895 "ddgst": false, 00:17:43.895 "dhchap_key": "key1", 00:17:43.895 "dhchap_ctrlr_key": "ckey2", 00:17:43.895 "method": "bdev_nvme_attach_controller", 00:17:43.895 "req_id": 1 00:17:43.895 } 00:17:43.895 Got JSON-RPC error response 00:17:43.895 response: 00:17:43.895 { 00:17:43.895 "code": -5, 00:17:43.895 "message": "Input/output error" 00:17:43.895 } 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:43.895 rmmod nvme_tcp 00:17:43.895 rmmod nvme_fabrics 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 78857 ']' 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 78857 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 78857 ']' 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 78857 00:17:43.895 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:17:44.153 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:44.153 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78857 00:17:44.153 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:44.153 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:44.153 killing process with pid 78857 00:17:44.153 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78857' 00:17:44.153 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 78857 00:17:44.153 12:42:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 78857 00:17:44.411 12:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:44.411 
12:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:44.411 12:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:44.411 12:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:44.411 12:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:44.411 12:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.411 12:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:44.411 12:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.411 12:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:44.411 12:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:44.411 12:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:44.411 12:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:17:44.411 12:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:17:44.411 12:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:17:44.411 12:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:44.411 12:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:44.411 12:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:44.412 12:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:44.412 12:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:17:44.412 12:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:17:44.412 12:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:44.976 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:45.234 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:45.234 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:45.234 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.DoE /tmp/spdk.key-null.sf0 /tmp/spdk.key-sha256.GlK /tmp/spdk.key-sha384.VYK /tmp/spdk.key-sha512.pkI /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:17:45.234 12:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:45.492 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:45.750 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:45.750 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:45.750 00:17:45.750 real 0m36.202s 00:17:45.750 user 0m32.441s 00:17:45.750 sys 0m3.821s 00:17:45.750 12:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:45.750 ************************************ 00:17:45.750 END TEST nvmf_auth_host 00:17:45.750 12:42:11 nvmf_tcp.nvmf_auth_host 
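The remaining cleanup undoes the kernel nvmet configuration in reverse order of creation: unlink the allowed host and remove its hosts/ entry, disable and remove the namespace, unlink the subsystem from the port, remove the port and subsystem directories, then unload nvmet_tcp/nvmet before setup.sh rebinds the NVMe devices and the generated key files are deleted. The configfs paths below are the ones in the trace; the target of the bare "echo 0" is not shown by xtrace and is assumed here to be the namespace enable attribute:

    sub=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    rm    "$sub"/allowed_hosts/nqn.2024-02.io.spdk:host0
    rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 0 > "$sub"/namespaces/1/enable       # assumed target of the 'echo 0' seen in the trace
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
    rmdir "$sub"/namespaces/1
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir "$sub"
    modprobe -r nvmet_tcp nvmet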
-- common/autotest_common.sh@10 -- # set +x 00:17:45.750 ************************************ 00:17:45.750 12:42:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:45.750 12:42:11 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:17:45.750 12:42:11 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:45.750 12:42:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:45.750 12:42:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:45.750 12:42:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:45.750 ************************************ 00:17:45.750 START TEST nvmf_digest 00:17:45.750 ************************************ 00:17:45.750 12:42:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:45.750 * Looking for test storage... 00:17:45.750 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:45.750 12:42:11 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:45.750 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:17:45.750 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:45.750 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:45.750 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:45.750 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:45.750 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:45.750 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:45.750 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:45.750 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:45.750 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:45.750 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:45.750 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:17:45.750 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=16360ad5-8c23-4d49-afe0-9a35c426fec5 00:17:45.750 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:45.750 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:45.750 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:45.750 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:45.750 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:45.750 12:42:11 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:45.750 12:42:11 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:45.750 12:42:11 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:45.750 12:42:11 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.750 12:42:11 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.750 12:42:11 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.750 12:42:11 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:17:45.750 12:42:11 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.750 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:17:45.750 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:45.750 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:45.750 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:45.750 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:45.750 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:45.751 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:45.751 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:45.751 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:45.751 12:42:11 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:45.751 12:42:11 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:17:45.751 12:42:11 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:17:45.751 12:42:11 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != 
\t\c\p ]] 00:17:45.751 12:42:11 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:17:45.751 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:45.751 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:45.751 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:45.751 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:45.751 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:45.751 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.751 12:42:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:45.751 12:42:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.751 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:45.751 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:45.751 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:45.751 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:45.751 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:45.751 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:45.751 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:45.751 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:45.751 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:45.751 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:45.751 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:45.751 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:45.751 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:45.751 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:45.751 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:45.751 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:45.751 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:45.751 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:45.751 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:45.751 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:45.751 Cannot find device "nvmf_tgt_br" 00:17:45.751 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:17:45.751 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:46.008 Cannot find device "nvmf_tgt_br2" 00:17:46.008 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:17:46.008 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:46.008 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:46.008 Cannot find device "nvmf_tgt_br" 00:17:46.008 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:17:46.008 12:42:11 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:46.008 Cannot find device "nvmf_tgt_br2" 00:17:46.008 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:17:46.008 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:46.008 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:46.008 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:46.008 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:46.008 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:17:46.008 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:46.008 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:46.008 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:17:46.008 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:46.008 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:46.008 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:46.008 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:46.008 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:46.008 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:46.008 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:46.008 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:46.008 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:46.008 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:46.008 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:46.008 12:42:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:46.008 12:42:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:46.008 12:42:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:46.008 12:42:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:46.008 12:42:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:46.008 12:42:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:46.008 12:42:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:46.008 12:42:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:46.266 12:42:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:46.266 12:42:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:46.266 12:42:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:46.266 12:42:12 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:46.266 12:42:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:46.266 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:46.266 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:17:46.266 00:17:46.266 --- 10.0.0.2 ping statistics --- 00:17:46.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.266 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:17:46.266 12:42:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:46.266 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:46.266 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:17:46.266 00:17:46.266 --- 10.0.0.3 ping statistics --- 00:17:46.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.266 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:17:46.266 12:42:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:46.266 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:46.266 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:17:46.266 00:17:46.266 --- 10.0.0.1 ping statistics --- 00:17:46.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.266 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:17:46.266 12:42:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:46.266 12:42:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:17:46.266 12:42:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:46.266 12:42:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:46.266 12:42:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:46.266 12:42:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:46.266 12:42:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:46.266 12:42:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:46.266 12:42:12 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:46.266 12:42:12 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:46.266 12:42:12 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:17:46.266 12:42:12 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:17:46.266 12:42:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:46.266 12:42:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:46.266 12:42:12 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:46.266 ************************************ 00:17:46.266 START TEST nvmf_digest_clean 00:17:46.266 ************************************ 00:17:46.266 12:42:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:17:46.266 12:42:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:17:46.266 12:42:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:17:46.266 12:42:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:17:46.266 12:42:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:17:46.266 12:42:12 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:17:46.266 12:42:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:46.266 12:42:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:46.266 12:42:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:46.266 12:42:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=80434 00:17:46.266 12:42:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:46.266 12:42:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 80434 00:17:46.266 12:42:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80434 ']' 00:17:46.266 12:42:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.266 12:42:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:46.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.266 12:42:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.266 12:42:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:46.266 12:42:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:46.266 [2024-07-12 12:42:12.234343] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:17:46.267 [2024-07-12 12:42:12.234499] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:46.524 [2024-07-12 12:42:12.373151] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.524 [2024-07-12 12:42:12.494329] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:46.524 [2024-07-12 12:42:12.494398] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:46.524 [2024-07-12 12:42:12.494423] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:46.524 [2024-07-12 12:42:12.494432] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:46.524 [2024-07-12 12:42:12.494440] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
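The nvmf_veth_init trace above (through the ping checks) builds the point-to-point test network that everything else in this section runs over. As a condensed sketch, using only interface names, addresses, and commands that appear in the log (root privileges and iproute2/iptables assumed; the second target interface, nvmf_tgt_if2 with 10.0.0.3, is created the same way):

    # initiator 10.0.0.1 (host netns) <-> nvmf_br bridge <-> target 10.0.0.2 (netns nvmf_tgt_ns_spdk)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT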
00:17:46.524 [2024-07-12 12:42:12.494471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.458 12:42:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:47.458 12:42:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:17:47.458 12:42:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:47.458 12:42:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:47.458 12:42:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:47.458 12:42:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:47.458 12:42:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:17:47.458 12:42:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:17:47.458 12:42:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:17:47.458 12:42:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.458 12:42:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:47.458 [2024-07-12 12:42:13.321831] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:47.458 null0 00:17:47.458 [2024-07-12 12:42:13.373796] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:47.458 [2024-07-12 12:42:13.397929] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:47.458 12:42:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.458 12:42:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:17:47.458 12:42:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:47.458 12:42:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:47.458 12:42:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:47.458 12:42:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:47.458 12:42:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:47.458 12:42:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:47.458 12:42:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80466 00:17:47.458 12:42:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:47.458 12:42:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80466 /var/tmp/bperf.sock 00:17:47.458 12:42:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80466 ']' 00:17:47.458 12:42:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:47.458 12:42:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:47.458 12:42:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:47.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:47.458 12:42:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:47.458 12:42:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:47.458 [2024-07-12 12:42:13.453381] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:17:47.458 [2024-07-12 12:42:13.453480] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80466 ] 00:17:47.715 [2024-07-12 12:42:13.586964] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.715 [2024-07-12 12:42:13.693187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:48.698 12:42:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:48.698 12:42:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:17:48.698 12:42:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:48.698 12:42:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:48.698 12:42:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:48.954 [2024-07-12 12:42:14.778142] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:48.954 12:42:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:48.954 12:42:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:49.212 nvme0n1 00:17:49.212 12:42:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:49.212 12:42:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:49.469 Running I/O for 2 seconds... 
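Each digest_clean pass drives a dedicated bdevperf instance over its own RPC socket. Stripped of the xtrace prefixes, the flow of this first pass (randread, 4 KiB, queue depth 128) looks roughly like the following sketch; paths and arguments are copied from the trace, and the real helpers additionally wait for the socket to appear before issuing RPCs:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    $bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    $rpc framework_start_init                      # finish the init deferred by --wait-for-rpc
    $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0     # --ddgst enables the NVMe/TCP data digest
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests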
00:17:51.383 00:17:51.383 Latency(us) 00:17:51.383 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:51.383 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:51.383 nvme0n1 : 2.00 14708.26 57.45 0.00 0.00 8695.13 8221.79 24665.37 00:17:51.383 =================================================================================================================== 00:17:51.383 Total : 14708.26 57.45 0.00 0.00 8695.13 8221.79 24665.37 00:17:51.383 0 00:17:51.383 12:42:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:51.383 12:42:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:51.383 12:42:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:51.383 12:42:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:51.383 12:42:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:51.383 | select(.opcode=="crc32c") 00:17:51.383 | "\(.module_name) \(.executed)"' 00:17:51.641 12:42:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:51.641 12:42:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:51.641 12:42:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:51.641 12:42:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:51.641 12:42:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80466 00:17:51.641 12:42:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80466 ']' 00:17:51.641 12:42:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80466 00:17:51.641 12:42:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:17:51.641 12:42:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:51.641 12:42:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80466 00:17:51.641 12:42:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:51.641 12:42:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:51.641 killing process with pid 80466 00:17:51.641 12:42:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80466' 00:17:51.641 12:42:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80466 00:17:51.641 Received shutdown signal, test time was about 2.000000 seconds 00:17:51.641 00:17:51.641 Latency(us) 00:17:51.641 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:51.641 =================================================================================================================== 00:17:51.641 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:51.641 12:42:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80466 00:17:51.898 12:42:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:17:51.898 12:42:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:51.898 12:42:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:51.898 12:42:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:51.898 12:42:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:51.898 12:42:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:51.898 12:42:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:51.898 12:42:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:51.898 12:42:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80532 00:17:51.898 12:42:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80532 /var/tmp/bperf.sock 00:17:51.898 12:42:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80532 ']' 00:17:51.898 12:42:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:51.898 12:42:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:51.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:51.898 12:42:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:51.898 12:42:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:51.898 12:42:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:51.898 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:51.898 Zero copy mechanism will not be used. 00:17:51.898 [2024-07-12 12:42:17.878578] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:17:51.898 [2024-07-12 12:42:17.878663] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80532 ] 00:17:52.156 [2024-07-12 12:42:18.013919] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.156 [2024-07-12 12:42:18.124323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:53.090 12:42:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:53.090 12:42:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:17:53.090 12:42:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:53.090 12:42:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:53.090 12:42:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:53.090 [2024-07-12 12:42:19.101975] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:53.090 12:42:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:53.090 12:42:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:53.657 nvme0n1 00:17:53.657 12:42:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:53.657 12:42:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:53.657 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:53.657 Zero copy mechanism will not be used. 00:17:53.657 Running I/O for 2 seconds... 
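For orientation, the four digest_clean passes in this test repeat the flow shown above and differ only in workload, block size, and queue depth; the 131072-byte passes also print the zero-copy threshold notice because they exceed the 65536-byte limit. A compact restatement of the matrix (the script itself issues four explicit calls, per the host/digest.sh@128-131 trace lines; run_bperf is the helper defined there):

    # rw        bs      qd     (scan_dsa stays false on this virtual host)
    for args in "randread 4096 128" "randread 131072 16" "randwrite 4096 128" "randwrite 131072 16"; do
        run_bperf $args false
    done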
00:17:55.558 00:17:55.558 Latency(us) 00:17:55.558 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.558 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:55.558 nvme0n1 : 2.00 7292.88 911.61 0.00 0.00 2190.40 1906.50 7804.74 00:17:55.558 =================================================================================================================== 00:17:55.558 Total : 7292.88 911.61 0.00 0.00 2190.40 1906.50 7804.74 00:17:55.558 0 00:17:55.558 12:42:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:55.558 12:42:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:55.558 12:42:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:55.558 12:42:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:55.558 12:42:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:55.558 | select(.opcode=="crc32c") 00:17:55.558 | "\(.module_name) \(.executed)"' 00:17:55.816 12:42:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:55.816 12:42:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:55.816 12:42:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:55.816 12:42:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:55.816 12:42:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80532 00:17:55.816 12:42:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80532 ']' 00:17:55.816 12:42:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80532 00:17:55.816 12:42:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:17:55.816 12:42:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:55.816 12:42:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80532 00:17:56.074 12:42:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:56.074 12:42:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:56.074 killing process with pid 80532 00:17:56.074 12:42:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80532' 00:17:56.074 Received shutdown signal, test time was about 2.000000 seconds 00:17:56.074 00:17:56.074 Latency(us) 00:17:56.074 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.074 =================================================================================================================== 00:17:56.074 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:56.074 12:42:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80532 00:17:56.074 12:42:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80532 00:17:56.074 12:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:17:56.074 12:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:56.074 12:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:56.074 12:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:56.074 12:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:56.074 12:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:56.074 12:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:56.074 12:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80587 00:17:56.074 12:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80587 /var/tmp/bperf.sock 00:17:56.074 12:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:56.074 12:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80587 ']' 00:17:56.074 12:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:56.074 12:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:56.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:56.074 12:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:56.074 12:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:56.074 12:42:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:56.332 [2024-07-12 12:42:22.186020] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:17:56.332 [2024-07-12 12:42:22.186107] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80587 ] 00:17:56.332 [2024-07-12 12:42:22.323829] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.588 [2024-07-12 12:42:22.429807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.153 12:42:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:57.153 12:42:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:17:57.153 12:42:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:57.153 12:42:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:57.153 12:42:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:57.410 [2024-07-12 12:42:23.409381] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:17:57.410 12:42:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:57.410 12:42:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:57.975 nvme0n1 00:17:57.975 12:42:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:57.975 12:42:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:57.975 Running I/O for 2 seconds... 
00:17:59.873 00:17:59.873 Latency(us) 00:17:59.873 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.873 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:59.873 nvme0n1 : 2.01 15582.29 60.87 0.00 0.00 8207.46 7119.59 17754.30 00:17:59.873 =================================================================================================================== 00:17:59.873 Total : 15582.29 60.87 0.00 0.00 8207.46 7119.59 17754.30 00:17:59.873 0 00:17:59.873 12:42:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:59.874 12:42:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:00.131 12:42:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:00.131 12:42:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:00.131 | select(.opcode=="crc32c") 00:18:00.131 | "\(.module_name) \(.executed)"' 00:18:00.132 12:42:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:00.132 12:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:00.132 12:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:00.132 12:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:00.132 12:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:00.132 12:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80587 00:18:00.132 12:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80587 ']' 00:18:00.132 12:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80587 00:18:00.132 12:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:18:00.132 12:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:00.132 12:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80587 00:18:00.390 12:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:00.390 12:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:00.390 killing process with pid 80587 00:18:00.390 12:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80587' 00:18:00.390 12:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80587 00:18:00.390 Received shutdown signal, test time was about 2.000000 seconds 00:18:00.390 00:18:00.390 Latency(us) 00:18:00.390 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.390 =================================================================================================================== 00:18:00.390 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:00.390 12:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80587 00:18:00.390 12:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:18:00.390 12:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:18:00.390 12:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:18:00.390 12:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:18:00.390 12:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:18:00.390 12:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:18:00.390 12:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:18:00.390 12:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=80647 00:18:00.390 12:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 80647 /var/tmp/bperf.sock 00:18:00.390 12:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:18:00.390 12:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 80647 ']' 00:18:00.390 12:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:00.390 12:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:00.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:00.390 12:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:00.390 12:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:00.390 12:42:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:00.647 [2024-07-12 12:42:26.536961] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:18:00.647 [2024-07-12 12:42:26.537131] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80647 ] 00:18:00.647 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:00.647 Zero copy mechanism will not be used. 
00:18:00.647 [2024-07-12 12:42:26.688393] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.905 [2024-07-12 12:42:26.800250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:01.472 12:42:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:01.472 12:42:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:18:01.472 12:42:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:18:01.472 12:42:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:18:01.472 12:42:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:18:02.037 [2024-07-12 12:42:27.839503] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:02.037 12:42:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:02.037 12:42:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:02.295 nvme0n1 00:18:02.295 12:42:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:18:02.295 12:42:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:02.295 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:02.295 Zero copy mechanism will not be used. 00:18:02.295 Running I/O for 2 seconds... 
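After each pass, the test reads the accel framework statistics back over the bperf socket and checks that the crc32c digests were really computed. A sketch of that verification, reusing the jq filter that appears in the trace (with DSA disabled, the expected module name is the software fallback):

    read -r acc_module acc_executed < <(
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    (( acc_executed > 0 )) && [[ $acc_module == software ]] \
        || echo "crc32c digest was not computed by the expected module"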
00:18:04.855 00:18:04.855 Latency(us) 00:18:04.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.855 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:04.855 nvme0n1 : 2.00 6007.25 750.91 0.00 0.00 2657.57 2412.92 7089.80 00:18:04.855 =================================================================================================================== 00:18:04.855 Total : 6007.25 750.91 0.00 0.00 2657.57 2412.92 7089.80 00:18:04.855 0 00:18:04.855 12:42:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:18:04.855 12:42:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:18:04.855 12:42:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:18:04.855 12:42:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:18:04.855 12:42:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:18:04.855 | select(.opcode=="crc32c") 00:18:04.855 | "\(.module_name) \(.executed)"' 00:18:04.855 12:42:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:18:04.855 12:42:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:18:04.855 12:42:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:18:04.855 12:42:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:04.855 12:42:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 80647 00:18:04.855 12:42:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 80647 ']' 00:18:04.855 12:42:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80647 00:18:04.855 12:42:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:18:04.855 12:42:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:04.855 12:42:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80647 00:18:04.855 12:42:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:04.855 12:42:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:04.855 killing process with pid 80647 00:18:04.855 12:42:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80647' 00:18:04.855 12:42:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80647 00:18:04.855 Received shutdown signal, test time was about 2.000000 seconds 00:18:04.855 00:18:04.855 Latency(us) 00:18:04.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.855 =================================================================================================================== 00:18:04.855 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:04.855 12:42:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80647 00:18:04.855 12:42:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 80434 00:18:04.855 12:42:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@948 -- # '[' -z 80434 ']' 00:18:04.855 12:42:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 80434 00:18:04.855 12:42:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:18:04.855 12:42:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:04.855 12:42:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80434 00:18:04.855 12:42:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:04.855 12:42:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:04.855 killing process with pid 80434 00:18:04.855 12:42:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80434' 00:18:04.855 12:42:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 80434 00:18:04.855 12:42:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 80434 00:18:05.113 00:18:05.113 real 0m18.956s 00:18:05.113 user 0m36.874s 00:18:05.113 sys 0m4.686s 00:18:05.113 12:42:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:05.113 12:42:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:18:05.113 ************************************ 00:18:05.113 END TEST nvmf_digest_clean 00:18:05.113 ************************************ 00:18:05.113 12:42:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:18:05.113 12:42:31 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:18:05.113 12:42:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:05.113 12:42:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:05.113 12:42:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:05.113 ************************************ 00:18:05.113 START TEST nvmf_digest_error 00:18:05.113 ************************************ 00:18:05.113 12:42:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:18:05.113 12:42:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:18:05.113 12:42:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:05.113 12:42:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:05.113 12:42:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:05.371 12:42:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=80736 00:18:05.371 12:42:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 80736 00:18:05.371 12:42:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:05.371 12:42:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80736 ']' 00:18:05.371 12:42:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.371 12:42:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:18:05.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.371 12:42:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.371 12:42:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:05.372 12:42:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:05.372 [2024-07-12 12:42:31.241801] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:18:05.372 [2024-07-12 12:42:31.241898] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:05.372 [2024-07-12 12:42:31.378833] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.629 [2024-07-12 12:42:31.499120] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:05.629 [2024-07-12 12:42:31.499192] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:05.629 [2024-07-12 12:42:31.499204] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:05.629 [2024-07-12 12:42:31.499213] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:05.629 [2024-07-12 12:42:31.499225] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:05.629 [2024-07-12 12:42:31.499256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.194 12:42:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:06.194 12:42:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:18:06.194 12:42:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:06.194 12:42:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:06.194 12:42:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:06.194 12:42:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:06.194 12:42:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:18:06.194 12:42:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.194 12:42:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:06.194 [2024-07-12 12:42:32.235816] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:18:06.194 12:42:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.194 12:42:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:18:06.194 12:42:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:18:06.194 12:42:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.194 12:42:32 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:06.451 [2024-07-12 12:42:32.299450] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:06.451 null0 00:18:06.451 [2024-07-12 12:42:32.357085] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:06.451 [2024-07-12 12:42:32.381220] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:06.451 12:42:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.451 12:42:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:18:06.451 12:42:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:06.451 12:42:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:18:06.451 12:42:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:18:06.451 12:42:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:18:06.451 12:42:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80768 00:18:06.451 12:42:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80768 /var/tmp/bperf.sock 00:18:06.451 12:42:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80768 ']' 00:18:06.451 12:42:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:06.451 12:42:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:06.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:06.451 12:42:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:06.451 12:42:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:06.451 12:42:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:06.451 12:42:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:18:06.451 [2024-07-12 12:42:32.432657] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:18:06.451 [2024-07-12 12:42:32.432734] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80768 ] 00:18:06.708 [2024-07-12 12:42:32.569343] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.708 [2024-07-12 12:42:32.678715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:06.708 [2024-07-12 12:42:32.731687] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:07.330 12:42:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:07.330 12:42:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:18:07.330 12:42:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:07.330 12:42:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:07.587 12:42:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:07.587 12:42:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.587 12:42:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:07.587 12:42:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.587 12:42:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:07.587 12:42:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:08.151 nvme0n1 00:18:08.151 12:42:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:08.151 12:42:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.151 12:42:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:08.151 12:42:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.151 12:42:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:08.151 12:42:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:08.151 Running I/O for 2 seconds... 
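The nvmf_digest_error phase that begins above reuses the same connection setup but forces digest failures from the target side: crc32c is assigned to the accel error module at target startup, error-stat tracking and unlimited retries are enabled on the initiator, and corruption is then injected before the workload runs. Condensed from the RPC calls in the trace (rpc.py without -s talks to the target app's default socket; options are copied verbatim and this is an illustration, not the literal script):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bperf_rpc="$rpc -s /var/tmp/bperf.sock"
    $rpc accel_assign_opc -o crc32c -m error                  # route crc32c work through the error module
    $bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $rpc accel_error_inject_error -o crc32c -t disable        # start from a clean state
    $bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 256 # inject digest corruption
    # each corrupted digest then surfaces on the initiator as the "data digest error on tqpair"
    # / TRANSIENT TRANSPORT ERROR completions recorded below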
00:18:08.151 [2024-07-12 12:42:34.182721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.151 [2024-07-12 12:42:34.182785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.151 [2024-07-12 12:42:34.182801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.151 [2024-07-12 12:42:34.199554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.151 [2024-07-12 12:42:34.199602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.151 [2024-07-12 12:42:34.199616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.151 [2024-07-12 12:42:34.216318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.151 [2024-07-12 12:42:34.216354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.151 [2024-07-12 12:42:34.216368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.410 [2024-07-12 12:42:34.233183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.410 [2024-07-12 12:42:34.233219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.410 [2024-07-12 12:42:34.233232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.410 [2024-07-12 12:42:34.249930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.410 [2024-07-12 12:42:34.249965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.410 [2024-07-12 12:42:34.249978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.410 [2024-07-12 12:42:34.266672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.410 [2024-07-12 12:42:34.266705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.410 [2024-07-12 12:42:34.266718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.410 [2024-07-12 12:42:34.283478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.410 [2024-07-12 12:42:34.283512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.410 [2024-07-12 12:42:34.283525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.410 [2024-07-12 12:42:34.300205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.410 [2024-07-12 12:42:34.300239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.410 [2024-07-12 12:42:34.300252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.410 [2024-07-12 12:42:34.316974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.410 [2024-07-12 12:42:34.317007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.410 [2024-07-12 12:42:34.317020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.410 [2024-07-12 12:42:34.333781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.410 [2024-07-12 12:42:34.333815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.410 [2024-07-12 12:42:34.333828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.410 [2024-07-12 12:42:34.350622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.410 [2024-07-12 12:42:34.350656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.410 [2024-07-12 12:42:34.350669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.410 [2024-07-12 12:42:34.367355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.410 [2024-07-12 12:42:34.367389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.410 [2024-07-12 12:42:34.367411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.410 [2024-07-12 12:42:34.384352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.410 [2024-07-12 12:42:34.384387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.410 [2024-07-12 12:42:34.384410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.410 [2024-07-12 12:42:34.401206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.410 [2024-07-12 12:42:34.401240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:7582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.410 [2024-07-12 12:42:34.401253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.410 [2024-07-12 12:42:34.417900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.410 [2024-07-12 12:42:34.417933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.410 [2024-07-12 12:42:34.417946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.410 [2024-07-12 12:42:34.434736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.410 [2024-07-12 12:42:34.434768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.410 [2024-07-12 12:42:34.434780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.410 [2024-07-12 12:42:34.451425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.410 [2024-07-12 12:42:34.451466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.410 [2024-07-12 12:42:34.451479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.410 [2024-07-12 12:42:34.468218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.410 [2024-07-12 12:42:34.468250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.410 [2024-07-12 12:42:34.468263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.668 [2024-07-12 12:42:34.485043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.668 [2024-07-12 12:42:34.485078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.668 [2024-07-12 12:42:34.485090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.668 [2024-07-12 12:42:34.501768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.668 [2024-07-12 12:42:34.501801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.668 [2024-07-12 12:42:34.501813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.668 [2024-07-12 12:42:34.518463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.668 [2024-07-12 12:42:34.518496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.668 [2024-07-12 12:42:34.518509] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.668 [2024-07-12 12:42:34.535325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.668 [2024-07-12 12:42:34.535361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.668 [2024-07-12 12:42:34.535374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.668 [2024-07-12 12:42:34.552066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.668 [2024-07-12 12:42:34.552099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.668 [2024-07-12 12:42:34.552112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.668 [2024-07-12 12:42:34.568792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.668 [2024-07-12 12:42:34.568825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.669 [2024-07-12 12:42:34.568838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.669 [2024-07-12 12:42:34.585553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.669 [2024-07-12 12:42:34.585591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:25323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.669 [2024-07-12 12:42:34.585604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.669 [2024-07-12 12:42:34.602535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.669 [2024-07-12 12:42:34.602573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.669 [2024-07-12 12:42:34.602586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.669 [2024-07-12 12:42:34.619355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.669 [2024-07-12 12:42:34.619391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.669 [2024-07-12 12:42:34.619413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.669 [2024-07-12 12:42:34.636129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.669 [2024-07-12 12:42:34.636165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:16501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.669 
[2024-07-12 12:42:34.636179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.669 [2024-07-12 12:42:34.653173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.669 [2024-07-12 12:42:34.653209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:10490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.669 [2024-07-12 12:42:34.653223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.669 [2024-07-12 12:42:34.670129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.669 [2024-07-12 12:42:34.670166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.669 [2024-07-12 12:42:34.670179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.669 [2024-07-12 12:42:34.686917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.669 [2024-07-12 12:42:34.686954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.669 [2024-07-12 12:42:34.686967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.669 [2024-07-12 12:42:34.703713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.669 [2024-07-12 12:42:34.703747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:6312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.669 [2024-07-12 12:42:34.703760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.669 [2024-07-12 12:42:34.720507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.669 [2024-07-12 12:42:34.720548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.669 [2024-07-12 12:42:34.720561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.669 [2024-07-12 12:42:34.737321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.669 [2024-07-12 12:42:34.737368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.669 [2024-07-12 12:42:34.737382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.927 [2024-07-12 12:42:34.754301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.927 [2024-07-12 12:42:34.754339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11532 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.927 [2024-07-12 12:42:34.754352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.927 [2024-07-12 12:42:34.771101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.927 [2024-07-12 12:42:34.771138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.927 [2024-07-12 12:42:34.771152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.927 [2024-07-12 12:42:34.787886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.927 [2024-07-12 12:42:34.787921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.927 [2024-07-12 12:42:34.787934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.927 [2024-07-12 12:42:34.804711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.927 [2024-07-12 12:42:34.804746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:25127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.927 [2024-07-12 12:42:34.804759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.927 [2024-07-12 12:42:34.821424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.927 [2024-07-12 12:42:34.821456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.927 [2024-07-12 12:42:34.821469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.927 [2024-07-12 12:42:34.838243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.927 [2024-07-12 12:42:34.838277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.927 [2024-07-12 12:42:34.838290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.927 [2024-07-12 12:42:34.855022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.927 [2024-07-12 12:42:34.855055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.927 [2024-07-12 12:42:34.855068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.927 [2024-07-12 12:42:34.871983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.927 [2024-07-12 12:42:34.872020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:83 nsid:1 lba:649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.927 [2024-07-12 12:42:34.872033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.927 [2024-07-12 12:42:34.888739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.927 [2024-07-12 12:42:34.888775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:20177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.927 [2024-07-12 12:42:34.888788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.927 [2024-07-12 12:42:34.905630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.927 [2024-07-12 12:42:34.905665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.927 [2024-07-12 12:42:34.905678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.927 [2024-07-12 12:42:34.922489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.927 [2024-07-12 12:42:34.922523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.927 [2024-07-12 12:42:34.922536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.927 [2024-07-12 12:42:34.939229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.927 [2024-07-12 12:42:34.939261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.927 [2024-07-12 12:42:34.939273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.927 [2024-07-12 12:42:34.956079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.927 [2024-07-12 12:42:34.956117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.927 [2024-07-12 12:42:34.956130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.927 [2024-07-12 12:42:34.972858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.927 [2024-07-12 12:42:34.972894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:7458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.927 [2024-07-12 12:42:34.972908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:08.927 [2024-07-12 12:42:34.989677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:08.927 [2024-07-12 12:42:34.989736] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:18529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:08.927 [2024-07-12 12:42:34.989751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.185 [2024-07-12 12:42:35.006715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.185 [2024-07-12 12:42:35.006753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:20426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.185 [2024-07-12 12:42:35.006767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.185 [2024-07-12 12:42:35.023517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.185 [2024-07-12 12:42:35.023553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.185 [2024-07-12 12:42:35.023566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.185 [2024-07-12 12:42:35.040294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.185 [2024-07-12 12:42:35.040332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.185 [2024-07-12 12:42:35.040345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.185 [2024-07-12 12:42:35.057065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.185 [2024-07-12 12:42:35.057099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.185 [2024-07-12 12:42:35.057112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.185 [2024-07-12 12:42:35.073805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.185 [2024-07-12 12:42:35.073838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.185 [2024-07-12 12:42:35.073851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.185 [2024-07-12 12:42:35.090497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.185 [2024-07-12 12:42:35.090530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.185 [2024-07-12 12:42:35.090543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.185 [2024-07-12 12:42:35.107227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 
00:18:09.185 [2024-07-12 12:42:35.107259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:15986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.185 [2024-07-12 12:42:35.107272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.185 [2024-07-12 12:42:35.124125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.185 [2024-07-12 12:42:35.124157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:17500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.185 [2024-07-12 12:42:35.124169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.185 [2024-07-12 12:42:35.140919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.185 [2024-07-12 12:42:35.140955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.185 [2024-07-12 12:42:35.140968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.185 [2024-07-12 12:42:35.157794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.185 [2024-07-12 12:42:35.157827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.185 [2024-07-12 12:42:35.157839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.185 [2024-07-12 12:42:35.174799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.185 [2024-07-12 12:42:35.174832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.185 [2024-07-12 12:42:35.174845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.185 [2024-07-12 12:42:35.191554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.185 [2024-07-12 12:42:35.191586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:24373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.185 [2024-07-12 12:42:35.191599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.185 [2024-07-12 12:42:35.208301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.185 [2024-07-12 12:42:35.208334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.185 [2024-07-12 12:42:35.208346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.185 [2024-07-12 12:42:35.224995] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.185 [2024-07-12 12:42:35.225028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.185 [2024-07-12 12:42:35.225040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.185 [2024-07-12 12:42:35.248986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.185 [2024-07-12 12:42:35.249020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.185 [2024-07-12 12:42:35.249034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.443 [2024-07-12 12:42:35.265947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.443 [2024-07-12 12:42:35.266003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:11686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.443 [2024-07-12 12:42:35.266017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.443 [2024-07-12 12:42:35.282853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.443 [2024-07-12 12:42:35.282895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.443 [2024-07-12 12:42:35.282908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.443 [2024-07-12 12:42:35.299652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.443 [2024-07-12 12:42:35.299698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.443 [2024-07-12 12:42:35.299712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.443 [2024-07-12 12:42:35.316623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.443 [2024-07-12 12:42:35.316660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.443 [2024-07-12 12:42:35.316673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.443 [2024-07-12 12:42:35.333424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.443 [2024-07-12 12:42:35.333461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:18508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.443 [2024-07-12 12:42:35.333474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:18:09.443 [2024-07-12 12:42:35.350164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.443 [2024-07-12 12:42:35.350202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.443 [2024-07-12 12:42:35.350214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.443 [2024-07-12 12:42:35.366929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.443 [2024-07-12 12:42:35.366968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.443 [2024-07-12 12:42:35.366982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.443 [2024-07-12 12:42:35.383904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.443 [2024-07-12 12:42:35.383944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.443 [2024-07-12 12:42:35.383957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.443 [2024-07-12 12:42:35.400646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.443 [2024-07-12 12:42:35.400683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.443 [2024-07-12 12:42:35.400696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.443 [2024-07-12 12:42:35.417379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.443 [2024-07-12 12:42:35.417427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.443 [2024-07-12 12:42:35.417440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.443 [2024-07-12 12:42:35.434233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.443 [2024-07-12 12:42:35.434268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:3681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.443 [2024-07-12 12:42:35.434281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.444 [2024-07-12 12:42:35.450952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.444 [2024-07-12 12:42:35.450986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:9531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.444 [2024-07-12 12:42:35.451000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.444 [2024-07-12 12:42:35.467777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.444 [2024-07-12 12:42:35.467810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:15404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.444 [2024-07-12 12:42:35.467823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.444 [2024-07-12 12:42:35.484490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.444 [2024-07-12 12:42:35.484523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.444 [2024-07-12 12:42:35.484535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.444 [2024-07-12 12:42:35.501322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.444 [2024-07-12 12:42:35.501355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.444 [2024-07-12 12:42:35.501368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.702 [2024-07-12 12:42:35.518212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.702 [2024-07-12 12:42:35.518248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.702 [2024-07-12 12:42:35.518261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.702 [2024-07-12 12:42:35.534980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.702 [2024-07-12 12:42:35.535016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.702 [2024-07-12 12:42:35.535029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.702 [2024-07-12 12:42:35.551717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.702 [2024-07-12 12:42:35.551750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:25218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.702 [2024-07-12 12:42:35.551762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.702 [2024-07-12 12:42:35.568432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.702 [2024-07-12 12:42:35.568467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.702 [2024-07-12 12:42:35.568479] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.702 [2024-07-12 12:42:35.585361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.702 [2024-07-12 12:42:35.585414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.702 [2024-07-12 12:42:35.585429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.702 [2024-07-12 12:42:35.602120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.702 [2024-07-12 12:42:35.602157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.702 [2024-07-12 12:42:35.602171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.702 [2024-07-12 12:42:35.619138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.702 [2024-07-12 12:42:35.619211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.702 [2024-07-12 12:42:35.619226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.702 [2024-07-12 12:42:35.636490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.702 [2024-07-12 12:42:35.636554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.702 [2024-07-12 12:42:35.636569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.702 [2024-07-12 12:42:35.653302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.702 [2024-07-12 12:42:35.653345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:23273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.702 [2024-07-12 12:42:35.653359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.702 [2024-07-12 12:42:35.670082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.702 [2024-07-12 12:42:35.670118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.702 [2024-07-12 12:42:35.670131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.702 [2024-07-12 12:42:35.687071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.702 [2024-07-12 12:42:35.687136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.702 
[2024-07-12 12:42:35.687149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.702 [2024-07-12 12:42:35.704131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.702 [2024-07-12 12:42:35.704187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.702 [2024-07-12 12:42:35.704202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.702 [2024-07-12 12:42:35.720895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.702 [2024-07-12 12:42:35.720932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.702 [2024-07-12 12:42:35.720945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.702 [2024-07-12 12:42:35.737655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.702 [2024-07-12 12:42:35.737691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.702 [2024-07-12 12:42:35.737704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.702 [2024-07-12 12:42:35.754423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.702 [2024-07-12 12:42:35.754466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.702 [2024-07-12 12:42:35.754478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.702 [2024-07-12 12:42:35.771157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.702 [2024-07-12 12:42:35.771197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.702 [2024-07-12 12:42:35.771210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.960 [2024-07-12 12:42:35.788178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.960 [2024-07-12 12:42:35.788218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.960 [2024-07-12 12:42:35.788232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.960 [2024-07-12 12:42:35.805141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.960 [2024-07-12 12:42:35.805184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8585 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.960 [2024-07-12 12:42:35.805197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.960 [2024-07-12 12:42:35.822387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.960 [2024-07-12 12:42:35.822470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.960 [2024-07-12 12:42:35.822485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.960 [2024-07-12 12:42:35.839517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.960 [2024-07-12 12:42:35.839556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:11910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.960 [2024-07-12 12:42:35.839569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.960 [2024-07-12 12:42:35.856267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.960 [2024-07-12 12:42:35.856303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.960 [2024-07-12 12:42:35.856316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.960 [2024-07-12 12:42:35.873396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.960 [2024-07-12 12:42:35.873477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:8915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.960 [2024-07-12 12:42:35.873492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.960 [2024-07-12 12:42:35.890876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.960 [2024-07-12 12:42:35.890942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.960 [2024-07-12 12:42:35.890958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.960 [2024-07-12 12:42:35.907748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.960 [2024-07-12 12:42:35.907785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.960 [2024-07-12 12:42:35.907799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.960 [2024-07-12 12:42:35.924512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.960 [2024-07-12 12:42:35.924547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:46 nsid:1 lba:23554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.960 [2024-07-12 12:42:35.924561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.960 [2024-07-12 12:42:35.941290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.960 [2024-07-12 12:42:35.941325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:18143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.960 [2024-07-12 12:42:35.941338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.960 [2024-07-12 12:42:35.958170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.960 [2024-07-12 12:42:35.958209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.960 [2024-07-12 12:42:35.958223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.960 [2024-07-12 12:42:35.974954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.960 [2024-07-12 12:42:35.974992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.960 [2024-07-12 12:42:35.975005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.960 [2024-07-12 12:42:35.992183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.960 [2024-07-12 12:42:35.992259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.960 [2024-07-12 12:42:35.992274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.960 [2024-07-12 12:42:36.009369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.960 [2024-07-12 12:42:36.009450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.960 [2024-07-12 12:42:36.009466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:09.960 [2024-07-12 12:42:36.026168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:09.961 [2024-07-12 12:42:36.026204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:09.961 [2024-07-12 12:42:36.026217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.218 [2024-07-12 12:42:36.043122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:10.218 [2024-07-12 12:42:36.043157] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.218 [2024-07-12 12:42:36.043170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.218 [2024-07-12 12:42:36.060085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:10.218 [2024-07-12 12:42:36.060124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:25062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.218 [2024-07-12 12:42:36.060137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.218 [2024-07-12 12:42:36.076855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:10.218 [2024-07-12 12:42:36.076890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.218 [2024-07-12 12:42:36.076904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.218 [2024-07-12 12:42:36.093868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:10.218 [2024-07-12 12:42:36.093905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.218 [2024-07-12 12:42:36.093918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.218 [2024-07-12 12:42:36.110679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:10.218 [2024-07-12 12:42:36.110715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:24384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.218 [2024-07-12 12:42:36.110730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.218 [2024-07-12 12:42:36.127486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:10.218 [2024-07-12 12:42:36.127520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:14489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.218 [2024-07-12 12:42:36.127533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.218 [2024-07-12 12:42:36.144316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:10.218 [2024-07-12 12:42:36.144352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:12791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.218 [2024-07-12 12:42:36.144365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.218 [2024-07-12 12:42:36.160923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf5a020) 00:18:10.218 
[2024-07-12 12:42:36.160959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:10.218 [2024-07-12 12:42:36.160972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:10.218 00:18:10.218 Latency(us) 00:18:10.218 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.218 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:18:10.218 nvme0n1 : 2.01 14997.40 58.58 0.00 0.00 8527.08 7923.90 32648.84 00:18:10.218 =================================================================================================================== 00:18:10.218 Total : 14997.40 58.58 0.00 0.00 8527.08 7923.90 32648.84 00:18:10.218 0 00:18:10.218 12:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:10.218 12:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:10.218 12:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:10.218 | .driver_specific 00:18:10.218 | .nvme_error 00:18:10.218 | .status_code 00:18:10.218 | .command_transient_transport_error' 00:18:10.218 12:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:10.476 12:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 118 > 0 )) 00:18:10.476 12:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80768 00:18:10.476 12:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80768 ']' 00:18:10.476 12:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80768 00:18:10.476 12:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:18:10.476 12:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:10.476 12:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80768 00:18:10.476 12:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:10.476 12:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:10.476 killing process with pid 80768 00:18:10.476 12:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80768' 00:18:10.476 Received shutdown signal, test time was about 2.000000 seconds 00:18:10.476 00:18:10.476 Latency(us) 00:18:10.476 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.476 =================================================================================================================== 00:18:10.476 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:10.476 12:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80768 00:18:10.476 12:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80768 00:18:10.792 12:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:18:10.792 12:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local 
rw bs qd 00:18:10.792 12:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:18:10.792 12:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:10.792 12:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:10.792 12:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80823 00:18:10.792 12:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80823 /var/tmp/bperf.sock 00:18:10.792 12:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:18:10.792 12:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80823 ']' 00:18:10.792 12:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:10.792 12:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:10.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:10.792 12:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:10.792 12:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:10.792 12:42:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:10.792 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:10.792 Zero copy mechanism will not be used. 00:18:10.792 [2024-07-12 12:42:36.790437] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:18:10.792 [2024-07-12 12:42:36.790538] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80823 ] 00:18:11.059 [2024-07-12 12:42:36.930573] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.059 [2024-07-12 12:42:37.045210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.059 [2024-07-12 12:42:37.100767] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:11.993 12:42:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:11.993 12:42:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:18:11.993 12:42:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:11.993 12:42:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:11.993 12:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:11.993 12:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.993 12:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:11.993 12:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.994 12:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:11.994 12:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:12.559 nvme0n1 00:18:12.559 12:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:12.559 12:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.559 12:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:12.559 12:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.559 12:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:12.559 12:42:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:12.559 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:12.559 Zero copy mechanism will not be used. 00:18:12.559 Running I/O for 2 seconds... 
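The trace above boils down to a short sequence: bdevperf is launched in wait-for-RPC mode (-z) on a private socket, NVMe error counting and unlimited retries are enabled, the NVMe-oF TCP controller is attached with data digest enabled (--ddgst), crc32c corruption is injected on the target side, the two-second workload is kicked off with perform_tests, and the transient transport error count is read back from bdev_get_iostat. Below is a condensed sketch of that flow, not digest.sh itself: the paths, target address and RPC commands are the ones visible in this trace, while the plain sleep and the default target RPC socket stand in for the waitforlisten/rpc_cmd helpers the script actually uses.

  # Sketch of the digest-error flow traced above (assumptions noted inline); not digest.sh itself.
  SPDK=/home/vagrant/spdk_repo/spdk
  BPERF_SOCK=/var/tmp/bperf.sock

  # Start bdevperf idle (-z): it only runs I/O once perform_tests is sent over $BPERF_SOCK.
  $SPDK/build/examples/bdevperf -m 2 -r $BPERF_SOCK -w randread -o 131072 -t 2 -q 16 -z &
  sleep 1   # digest.sh blocks on waitforlisten instead of sleeping (simplification)

  # Count NVMe errors and retry forever, so digest failures surface as transient errors, not failed I/O.
  $SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Attach the target over TCP with data digest enabled.
  $SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # On the target application (rpc_cmd in the trace; default RPC socket assumed here),
  # corrupt every 32nd crc32c so the host sees data digest errors on read completions.
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

  # Run the 2-second workload, then read back how many commands ended in a transient transport error.
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests
  $SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

In the 4 KiB randread run that finished just above, that counter came back as 118; the test only requires it to be greater than zero before killing the bperf process and moving on to this 128 KiB, qd=16 run.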
00:18:12.559 [2024-07-12 12:42:38.492939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.559 [2024-07-12 12:42:38.493011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.559 [2024-07-12 12:42:38.493027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.560 [2024-07-12 12:42:38.497894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.560 [2024-07-12 12:42:38.497948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.560 [2024-07-12 12:42:38.497962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.560 [2024-07-12 12:42:38.502769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.560 [2024-07-12 12:42:38.502821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.560 [2024-07-12 12:42:38.502866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.560 [2024-07-12 12:42:38.507648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.560 [2024-07-12 12:42:38.507690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.560 [2024-07-12 12:42:38.507704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.560 [2024-07-12 12:42:38.512144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.560 [2024-07-12 12:42:38.512201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.560 [2024-07-12 12:42:38.512221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.560 [2024-07-12 12:42:38.517055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.560 [2024-07-12 12:42:38.517092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.560 [2024-07-12 12:42:38.517106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.560 [2024-07-12 12:42:38.522040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.560 [2024-07-12 12:42:38.522092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.560 [2024-07-12 12:42:38.522107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.560 [2024-07-12 12:42:38.526885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.560 [2024-07-12 12:42:38.526938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.560 [2024-07-12 12:42:38.526954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.560 [2024-07-12 12:42:38.531654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.560 [2024-07-12 12:42:38.531696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.560 [2024-07-12 12:42:38.531710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.560 [2024-07-12 12:42:38.536386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.560 [2024-07-12 12:42:38.536465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.560 [2024-07-12 12:42:38.536488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.560 [2024-07-12 12:42:38.541140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.560 [2024-07-12 12:42:38.541212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.560 [2024-07-12 12:42:38.541234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.560 [2024-07-12 12:42:38.545899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.560 [2024-07-12 12:42:38.545950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.560 [2024-07-12 12:42:38.545964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.560 [2024-07-12 12:42:38.550835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.560 [2024-07-12 12:42:38.550888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.560 [2024-07-12 12:42:38.550902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.560 [2024-07-12 12:42:38.555501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.560 [2024-07-12 12:42:38.555537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.560 [2024-07-12 12:42:38.555551] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.560 [2024-07-12 12:42:38.560145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.560 [2024-07-12 12:42:38.560219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.560 [2024-07-12 12:42:38.560239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.560 [2024-07-12 12:42:38.564929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.560 [2024-07-12 12:42:38.564980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.560 [2024-07-12 12:42:38.565010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.560 [2024-07-12 12:42:38.569745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.560 [2024-07-12 12:42:38.569797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.560 [2024-07-12 12:42:38.569811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.560 [2024-07-12 12:42:38.574577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.560 [2024-07-12 12:42:38.574630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.560 [2024-07-12 12:42:38.574644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.560 [2024-07-12 12:42:38.579134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.560 [2024-07-12 12:42:38.579172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.560 [2024-07-12 12:42:38.579209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.560 [2024-07-12 12:42:38.584029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.560 [2024-07-12 12:42:38.584064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.560 [2024-07-12 12:42:38.584077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.560 [2024-07-12 12:42:38.589048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.560 [2024-07-12 12:42:38.589099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:12.560 [2024-07-12 12:42:38.589112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.560 [2024-07-12 12:42:38.593961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.560 [2024-07-12 12:42:38.594015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.560 [2024-07-12 12:42:38.594045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.560 [2024-07-12 12:42:38.598792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.560 [2024-07-12 12:42:38.598839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.560 [2024-07-12 12:42:38.598853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.560 [2024-07-12 12:42:38.603397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.560 [2024-07-12 12:42:38.603490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.560 [2024-07-12 12:42:38.603505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.560 [2024-07-12 12:42:38.608117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.560 [2024-07-12 12:42:38.608184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.560 [2024-07-12 12:42:38.608221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.560 [2024-07-12 12:42:38.612946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.560 [2024-07-12 12:42:38.612999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.560 [2024-07-12 12:42:38.613013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.560 [2024-07-12 12:42:38.617645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.560 [2024-07-12 12:42:38.617696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.560 [2024-07-12 12:42:38.617709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.560 [2024-07-12 12:42:38.622405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.560 [2024-07-12 12:42:38.622484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.560 [2024-07-12 12:42:38.622521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.560 [2024-07-12 12:42:38.627233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.560 [2024-07-12 12:42:38.627285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.560 [2024-07-12 12:42:38.627299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.819 [2024-07-12 12:42:38.632335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.819 [2024-07-12 12:42:38.632375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.819 [2024-07-12 12:42:38.632390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.820 [2024-07-12 12:42:38.637336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.820 [2024-07-12 12:42:38.637389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.820 [2024-07-12 12:42:38.637403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.820 [2024-07-12 12:42:38.642084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.820 [2024-07-12 12:42:38.642135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.820 [2024-07-12 12:42:38.642147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.820 [2024-07-12 12:42:38.646919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.820 [2024-07-12 12:42:38.646969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.820 [2024-07-12 12:42:38.646982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.820 [2024-07-12 12:42:38.651905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.820 [2024-07-12 12:42:38.651972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.820 [2024-07-12 12:42:38.651986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.820 [2024-07-12 12:42:38.656655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.820 [2024-07-12 12:42:38.656707] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.820 [2024-07-12 12:42:38.656721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.820 [2024-07-12 12:42:38.661318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.820 [2024-07-12 12:42:38.661377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.820 [2024-07-12 12:42:38.661398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.820 [2024-07-12 12:42:38.666085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.820 [2024-07-12 12:42:38.666134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.820 [2024-07-12 12:42:38.666148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.820 [2024-07-12 12:42:38.671045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.820 [2024-07-12 12:42:38.671111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.820 [2024-07-12 12:42:38.671124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.820 [2024-07-12 12:42:38.675760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.820 [2024-07-12 12:42:38.675798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.820 [2024-07-12 12:42:38.675812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.820 [2024-07-12 12:42:38.680364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.820 [2024-07-12 12:42:38.680415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.820 [2024-07-12 12:42:38.680458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.820 [2024-07-12 12:42:38.685309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.820 [2024-07-12 12:42:38.685346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.820 [2024-07-12 12:42:38.685365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.820 [2024-07-12 12:42:38.690299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 
00:18:12.820 [2024-07-12 12:42:38.690350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.820 [2024-07-12 12:42:38.690363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.820 [2024-07-12 12:42:38.694901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.820 [2024-07-12 12:42:38.694952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.820 [2024-07-12 12:42:38.694966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.820 [2024-07-12 12:42:38.699518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.820 [2024-07-12 12:42:38.699554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.820 [2024-07-12 12:42:38.699568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.820 [2024-07-12 12:42:38.704074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.820 [2024-07-12 12:42:38.704124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.820 [2024-07-12 12:42:38.704138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.820 [2024-07-12 12:42:38.708804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.820 [2024-07-12 12:42:38.708856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.820 [2024-07-12 12:42:38.708870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.820 [2024-07-12 12:42:38.713697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.820 [2024-07-12 12:42:38.713749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.820 [2024-07-12 12:42:38.713763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.820 [2024-07-12 12:42:38.718282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.820 [2024-07-12 12:42:38.718333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.820 [2024-07-12 12:42:38.718346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.820 [2024-07-12 12:42:38.723007] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.820 [2024-07-12 12:42:38.723056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.820 [2024-07-12 12:42:38.723087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.820 [2024-07-12 12:42:38.727660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.820 [2024-07-12 12:42:38.727706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.820 [2024-07-12 12:42:38.727729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.820 [2024-07-12 12:42:38.732407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.820 [2024-07-12 12:42:38.732471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.820 [2024-07-12 12:42:38.732487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.820 [2024-07-12 12:42:38.737068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.820 [2024-07-12 12:42:38.737120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.820 [2024-07-12 12:42:38.737133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.821 [2024-07-12 12:42:38.741898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.821 [2024-07-12 12:42:38.741950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.821 [2024-07-12 12:42:38.741963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.821 [2024-07-12 12:42:38.746678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.821 [2024-07-12 12:42:38.746732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.821 [2024-07-12 12:42:38.746746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.821 [2024-07-12 12:42:38.751307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.821 [2024-07-12 12:42:38.751343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.821 [2024-07-12 12:42:38.751373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:18:12.821 [2024-07-12 12:42:38.756113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.821 [2024-07-12 12:42:38.756163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.821 [2024-07-12 12:42:38.756177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.821 [2024-07-12 12:42:38.760906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.821 [2024-07-12 12:42:38.760964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.821 [2024-07-12 12:42:38.760978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.821 [2024-07-12 12:42:38.765724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.821 [2024-07-12 12:42:38.765781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.821 [2024-07-12 12:42:38.765796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.821 [2024-07-12 12:42:38.770469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.821 [2024-07-12 12:42:38.770550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.821 [2024-07-12 12:42:38.770584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.821 [2024-07-12 12:42:38.775505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.821 [2024-07-12 12:42:38.775550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.821 [2024-07-12 12:42:38.775565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.821 [2024-07-12 12:42:38.780304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.821 [2024-07-12 12:42:38.780370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.821 [2024-07-12 12:42:38.780385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.821 [2024-07-12 12:42:38.784971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.821 [2024-07-12 12:42:38.785029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.821 [2024-07-12 12:42:38.785043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.821 [2024-07-12 12:42:38.789753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.821 [2024-07-12 12:42:38.789818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.821 [2024-07-12 12:42:38.789840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.821 [2024-07-12 12:42:38.794621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.821 [2024-07-12 12:42:38.794677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.821 [2024-07-12 12:42:38.794691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.821 [2024-07-12 12:42:38.799227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.821 [2024-07-12 12:42:38.799285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.821 [2024-07-12 12:42:38.799300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.821 [2024-07-12 12:42:38.803923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.821 [2024-07-12 12:42:38.803999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.821 [2024-07-12 12:42:38.804013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.821 [2024-07-12 12:42:38.808469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.821 [2024-07-12 12:42:38.808536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.821 [2024-07-12 12:42:38.808550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.821 [2024-07-12 12:42:38.813101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.821 [2024-07-12 12:42:38.813151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.821 [2024-07-12 12:42:38.813196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.821 [2024-07-12 12:42:38.817938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.821 [2024-07-12 12:42:38.817972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.821 [2024-07-12 12:42:38.818001] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.821 [2024-07-12 12:42:38.822726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.821 [2024-07-12 12:42:38.822766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.821 [2024-07-12 12:42:38.822780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.821 [2024-07-12 12:42:38.827319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.821 [2024-07-12 12:42:38.827371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.821 [2024-07-12 12:42:38.827385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.821 [2024-07-12 12:42:38.832052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.821 [2024-07-12 12:42:38.832104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.821 [2024-07-12 12:42:38.832117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.821 [2024-07-12 12:42:38.836912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.821 [2024-07-12 12:42:38.836971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.821 [2024-07-12 12:42:38.836986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.821 [2024-07-12 12:42:38.841756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.821 [2024-07-12 12:42:38.841794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.821 [2024-07-12 12:42:38.841808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.821 [2024-07-12 12:42:38.846382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.822 [2024-07-12 12:42:38.846447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.822 [2024-07-12 12:42:38.846478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.822 [2024-07-12 12:42:38.851191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.822 [2024-07-12 12:42:38.851263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:12.822 [2024-07-12 12:42:38.851278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.822 [2024-07-12 12:42:38.855975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.822 [2024-07-12 12:42:38.856027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.822 [2024-07-12 12:42:38.856041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.822 [2024-07-12 12:42:38.860628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.822 [2024-07-12 12:42:38.860680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.822 [2024-07-12 12:42:38.860695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.822 [2024-07-12 12:42:38.865346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.822 [2024-07-12 12:42:38.865395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.822 [2024-07-12 12:42:38.865423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.822 [2024-07-12 12:42:38.870037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.822 [2024-07-12 12:42:38.870089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.822 [2024-07-12 12:42:38.870104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:12.822 [2024-07-12 12:42:38.874840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.822 [2024-07-12 12:42:38.874908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.822 [2024-07-12 12:42:38.874937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:12.822 [2024-07-12 12:42:38.879692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.822 [2024-07-12 12:42:38.879730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.822 [2024-07-12 12:42:38.879744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:12.822 [2024-07-12 12:42:38.884165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.822 [2024-07-12 12:42:38.884238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.822 [2024-07-12 12:42:38.884253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:12.822 [2024-07-12 12:42:38.889133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:12.822 [2024-07-12 12:42:38.889171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:12.822 [2024-07-12 12:42:38.889185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.082 [2024-07-12 12:42:38.893950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.082 [2024-07-12 12:42:38.894001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.082 [2024-07-12 12:42:38.894014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.082 [2024-07-12 12:42:38.898918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.082 [2024-07-12 12:42:38.898971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.082 [2024-07-12 12:42:38.898985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.082 [2024-07-12 12:42:38.903691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.082 [2024-07-12 12:42:38.903730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.082 [2024-07-12 12:42:38.903744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.082 [2024-07-12 12:42:38.908531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.082 [2024-07-12 12:42:38.908568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.082 [2024-07-12 12:42:38.908581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.082 [2024-07-12 12:42:38.913081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.082 [2024-07-12 12:42:38.913132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.082 [2024-07-12 12:42:38.913146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.082 [2024-07-12 12:42:38.917825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.082 [2024-07-12 12:42:38.917876] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.082 [2024-07-12 12:42:38.917890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.082 [2024-07-12 12:42:38.922602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.082 [2024-07-12 12:42:38.922656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.082 [2024-07-12 12:42:38.922671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.082 [2024-07-12 12:42:38.927124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.082 [2024-07-12 12:42:38.927175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.082 [2024-07-12 12:42:38.927221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.082 [2024-07-12 12:42:38.931778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.082 [2024-07-12 12:42:38.931832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.082 [2024-07-12 12:42:38.931847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.082 [2024-07-12 12:42:38.936517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.082 [2024-07-12 12:42:38.936581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.082 [2024-07-12 12:42:38.936601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.082 [2024-07-12 12:42:38.941225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.082 [2024-07-12 12:42:38.941262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.082 [2024-07-12 12:42:38.941276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.083 [2024-07-12 12:42:38.946176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.083 [2024-07-12 12:42:38.946233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.083 [2024-07-12 12:42:38.946247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.083 [2024-07-12 12:42:38.950884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 
00:18:13.083 [2024-07-12 12:42:38.950920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.083 [2024-07-12 12:42:38.950949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.083 [2024-07-12 12:42:38.955785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.083 [2024-07-12 12:42:38.955834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.083 [2024-07-12 12:42:38.955849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.083 [2024-07-12 12:42:38.960583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.083 [2024-07-12 12:42:38.960629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.083 [2024-07-12 12:42:38.960643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.083 [2024-07-12 12:42:38.965413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.083 [2024-07-12 12:42:38.965484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.083 [2024-07-12 12:42:38.965501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.083 [2024-07-12 12:42:38.970232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.083 [2024-07-12 12:42:38.970300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.083 [2024-07-12 12:42:38.970316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.083 [2024-07-12 12:42:38.975003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.083 [2024-07-12 12:42:38.975064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.083 [2024-07-12 12:42:38.975080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.083 [2024-07-12 12:42:38.979896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.083 [2024-07-12 12:42:38.979945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.083 [2024-07-12 12:42:38.979960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.083 [2024-07-12 12:42:38.984666] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.083 [2024-07-12 12:42:38.984730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.083 [2024-07-12 12:42:38.984745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.083 [2024-07-12 12:42:38.989405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.083 [2024-07-12 12:42:38.989469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.083 [2024-07-12 12:42:38.989487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.083 [2024-07-12 12:42:38.994105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.083 [2024-07-12 12:42:38.994158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.083 [2024-07-12 12:42:38.994172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.083 [2024-07-12 12:42:38.998703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.083 [2024-07-12 12:42:38.998741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.083 [2024-07-12 12:42:38.998755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.083 [2024-07-12 12:42:39.003225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.083 [2024-07-12 12:42:39.003265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.083 [2024-07-12 12:42:39.003279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.083 [2024-07-12 12:42:39.007806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.083 [2024-07-12 12:42:39.007845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.083 [2024-07-12 12:42:39.007859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.083 [2024-07-12 12:42:39.012511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.083 [2024-07-12 12:42:39.012549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.083 [2024-07-12 12:42:39.012564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:18:13.083 [2024-07-12 12:42:39.017079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.083 [2024-07-12 12:42:39.017116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.083 [2024-07-12 12:42:39.017131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.083 [2024-07-12 12:42:39.021687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.083 [2024-07-12 12:42:39.021739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.083 [2024-07-12 12:42:39.021753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.083 [2024-07-12 12:42:39.026240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.083 [2024-07-12 12:42:39.026285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.083 [2024-07-12 12:42:39.026309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.083 [2024-07-12 12:42:39.030907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.083 [2024-07-12 12:42:39.030962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.083 [2024-07-12 12:42:39.030977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.083 [2024-07-12 12:42:39.035573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.083 [2024-07-12 12:42:39.035611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.083 [2024-07-12 12:42:39.035626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.083 [2024-07-12 12:42:39.040113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.083 [2024-07-12 12:42:39.040164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.083 [2024-07-12 12:42:39.040194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.083 [2024-07-12 12:42:39.044767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.083 [2024-07-12 12:42:39.044828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.083 [2024-07-12 12:42:39.044843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.083 [2024-07-12 12:42:39.049450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.083 [2024-07-12 12:42:39.049513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.083 [2024-07-12 12:42:39.049528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.083 [2024-07-12 12:42:39.054153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.083 [2024-07-12 12:42:39.054223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.083 [2024-07-12 12:42:39.054237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.083 [2024-07-12 12:42:39.058770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.083 [2024-07-12 12:42:39.058837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.083 [2024-07-12 12:42:39.058851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.083 [2024-07-12 12:42:39.063518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.083 [2024-07-12 12:42:39.063556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.083 [2024-07-12 12:42:39.063570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.083 [2024-07-12 12:42:39.067997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.083 [2024-07-12 12:42:39.068035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.083 [2024-07-12 12:42:39.068049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.083 [2024-07-12 12:42:39.072771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.083 [2024-07-12 12:42:39.072838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.083 [2024-07-12 12:42:39.072851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.083 [2024-07-12 12:42:39.077621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.083 [2024-07-12 12:42:39.077672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.083 [2024-07-12 12:42:39.077687] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.084 [2024-07-12 12:42:39.082368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.084 [2024-07-12 12:42:39.082417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.084 [2024-07-12 12:42:39.082433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.084 [2024-07-12 12:42:39.087065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.084 [2024-07-12 12:42:39.087117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.084 [2024-07-12 12:42:39.087131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.084 [2024-07-12 12:42:39.091797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.084 [2024-07-12 12:42:39.091835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.084 [2024-07-12 12:42:39.091849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.084 [2024-07-12 12:42:39.096426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.084 [2024-07-12 12:42:39.096490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.084 [2024-07-12 12:42:39.096504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.084 [2024-07-12 12:42:39.101249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.084 [2024-07-12 12:42:39.101302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.084 [2024-07-12 12:42:39.101315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.084 [2024-07-12 12:42:39.105901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.084 [2024-07-12 12:42:39.105959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.084 [2024-07-12 12:42:39.105973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.084 [2024-07-12 12:42:39.110450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.084 [2024-07-12 12:42:39.110525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:13.084 [2024-07-12 12:42:39.110545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.084 [2024-07-12 12:42:39.115065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.084 [2024-07-12 12:42:39.115116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.084 [2024-07-12 12:42:39.115130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.084 [2024-07-12 12:42:39.119807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.084 [2024-07-12 12:42:39.119895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.084 [2024-07-12 12:42:39.119925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.084 [2024-07-12 12:42:39.124485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.084 [2024-07-12 12:42:39.124536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.084 [2024-07-12 12:42:39.124550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.084 [2024-07-12 12:42:39.129009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.084 [2024-07-12 12:42:39.129060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.084 [2024-07-12 12:42:39.129074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.084 [2024-07-12 12:42:39.133678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.084 [2024-07-12 12:42:39.133718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.084 [2024-07-12 12:42:39.133733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.084 [2024-07-12 12:42:39.138420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.084 [2024-07-12 12:42:39.138502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.084 [2024-07-12 12:42:39.138517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.084 [2024-07-12 12:42:39.143084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.084 [2024-07-12 12:42:39.143135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.084 [2024-07-12 12:42:39.143149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.084 [2024-07-12 12:42:39.147832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.084 [2024-07-12 12:42:39.147869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.084 [2024-07-12 12:42:39.147883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.084 [2024-07-12 12:42:39.152751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.084 [2024-07-12 12:42:39.152786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.084 [2024-07-12 12:42:39.152815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.344 [2024-07-12 12:42:39.157344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.344 [2024-07-12 12:42:39.157396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.344 [2024-07-12 12:42:39.157410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.344 [2024-07-12 12:42:39.162094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.344 [2024-07-12 12:42:39.162145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.344 [2024-07-12 12:42:39.162159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.344 [2024-07-12 12:42:39.167037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.344 [2024-07-12 12:42:39.167105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.344 [2024-07-12 12:42:39.167118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.344 [2024-07-12 12:42:39.171919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.344 [2024-07-12 12:42:39.171986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.344 [2024-07-12 12:42:39.172000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.344 [2024-07-12 12:42:39.176740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.344 [2024-07-12 12:42:39.176791] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.344 [2024-07-12 12:42:39.176805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.344 [2024-07-12 12:42:39.181495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.344 [2024-07-12 12:42:39.181547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.344 [2024-07-12 12:42:39.181576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.344 [2024-07-12 12:42:39.186090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.344 [2024-07-12 12:42:39.186141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.344 [2024-07-12 12:42:39.186154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.344 [2024-07-12 12:42:39.190970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.344 [2024-07-12 12:42:39.191024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.344 [2024-07-12 12:42:39.191039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.344 [2024-07-12 12:42:39.195927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.344 [2024-07-12 12:42:39.195993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.344 [2024-07-12 12:42:39.196007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.344 [2024-07-12 12:42:39.200745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.344 [2024-07-12 12:42:39.200783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.344 [2024-07-12 12:42:39.200798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.344 [2024-07-12 12:42:39.205313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.344 [2024-07-12 12:42:39.205376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.344 [2024-07-12 12:42:39.205390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.344 [2024-07-12 12:42:39.210239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x24f1ac0) 00:18:13.344 [2024-07-12 12:42:39.210283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.344 [2024-07-12 12:42:39.210297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.344 [2024-07-12 12:42:39.215050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.344 [2024-07-12 12:42:39.215111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.344 [2024-07-12 12:42:39.215141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.344 [2024-07-12 12:42:39.219759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.344 [2024-07-12 12:42:39.219811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.344 [2024-07-12 12:42:39.219843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.345 [2024-07-12 12:42:39.224579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.345 [2024-07-12 12:42:39.224626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.345 [2024-07-12 12:42:39.224641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.345 [2024-07-12 12:42:39.229407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.345 [2024-07-12 12:42:39.229504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.345 [2024-07-12 12:42:39.229528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.345 [2024-07-12 12:42:39.234309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.345 [2024-07-12 12:42:39.234365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.345 [2024-07-12 12:42:39.234379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.345 [2024-07-12 12:42:39.239153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.345 [2024-07-12 12:42:39.239222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.345 [2024-07-12 12:42:39.239236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.345 [2024-07-12 12:42:39.243997] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.345 [2024-07-12 12:42:39.244048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.345 [2024-07-12 12:42:39.244062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.345 [2024-07-12 12:42:39.248659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.345 [2024-07-12 12:42:39.248710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.345 [2024-07-12 12:42:39.248738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.345 [2024-07-12 12:42:39.253390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.345 [2024-07-12 12:42:39.253483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.345 [2024-07-12 12:42:39.253506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.345 [2024-07-12 12:42:39.258161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.345 [2024-07-12 12:42:39.258230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.345 [2024-07-12 12:42:39.258243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.345 [2024-07-12 12:42:39.262845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.345 [2024-07-12 12:42:39.262896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.345 [2024-07-12 12:42:39.262910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.345 [2024-07-12 12:42:39.267548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.345 [2024-07-12 12:42:39.267585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.345 [2024-07-12 12:42:39.267599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.345 [2024-07-12 12:42:39.272113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.345 [2024-07-12 12:42:39.272163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.345 [2024-07-12 12:42:39.272197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:18:13.345 [2024-07-12 12:42:39.276735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.345 [2024-07-12 12:42:39.276785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.345 [2024-07-12 12:42:39.276798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.345 [2024-07-12 12:42:39.281344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.345 [2024-07-12 12:42:39.281408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.345 [2024-07-12 12:42:39.281448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.345 [2024-07-12 12:42:39.286166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.345 [2024-07-12 12:42:39.286204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.345 [2024-07-12 12:42:39.286219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.345 [2024-07-12 12:42:39.290977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.345 [2024-07-12 12:42:39.291031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.345 [2024-07-12 12:42:39.291060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.345 [2024-07-12 12:42:39.295818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.345 [2024-07-12 12:42:39.295886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.345 [2024-07-12 12:42:39.295900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.345 [2024-07-12 12:42:39.300580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.345 [2024-07-12 12:42:39.300631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.345 [2024-07-12 12:42:39.300644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.345 [2024-07-12 12:42:39.305303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.345 [2024-07-12 12:42:39.305359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.345 [2024-07-12 12:42:39.305380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.345 [2024-07-12 12:42:39.310061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.345 [2024-07-12 12:42:39.310113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.345 [2024-07-12 12:42:39.310126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.345 [2024-07-12 12:42:39.314626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.345 [2024-07-12 12:42:39.314678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.345 [2024-07-12 12:42:39.314692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.345 [2024-07-12 12:42:39.319160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.345 [2024-07-12 12:42:39.319228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.345 [2024-07-12 12:42:39.319241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.345 [2024-07-12 12:42:39.323898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.345 [2024-07-12 12:42:39.323950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.345 [2024-07-12 12:42:39.323979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.345 [2024-07-12 12:42:39.328653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.345 [2024-07-12 12:42:39.328713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.345 [2024-07-12 12:42:39.328735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.345 [2024-07-12 12:42:39.333519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.345 [2024-07-12 12:42:39.333584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.345 [2024-07-12 12:42:39.333597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.345 [2024-07-12 12:42:39.338170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.345 [2024-07-12 12:42:39.338239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.345 [2024-07-12 12:42:39.338253] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.345 [2024-07-12 12:42:39.342822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.345 [2024-07-12 12:42:39.342888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.345 [2024-07-12 12:42:39.342901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.345 [2024-07-12 12:42:39.347486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.345 [2024-07-12 12:42:39.347535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.345 [2024-07-12 12:42:39.347556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.345 [2024-07-12 12:42:39.352168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.345 [2024-07-12 12:42:39.352234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.345 [2024-07-12 12:42:39.352247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.345 [2024-07-12 12:42:39.357024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.345 [2024-07-12 12:42:39.357100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.345 [2024-07-12 12:42:39.357115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.345 [2024-07-12 12:42:39.361798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.346 [2024-07-12 12:42:39.361854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.346 [2024-07-12 12:42:39.361868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.346 [2024-07-12 12:42:39.366492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.346 [2024-07-12 12:42:39.366545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.346 [2024-07-12 12:42:39.366559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.346 [2024-07-12 12:42:39.371118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.346 [2024-07-12 12:42:39.371170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:13.346 [2024-07-12 12:42:39.371206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.346 [2024-07-12 12:42:39.375860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.346 [2024-07-12 12:42:39.375913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.346 [2024-07-12 12:42:39.375942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.346 [2024-07-12 12:42:39.380699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.346 [2024-07-12 12:42:39.380763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.346 [2024-07-12 12:42:39.380786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.346 [2024-07-12 12:42:39.385423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.346 [2024-07-12 12:42:39.385486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.346 [2024-07-12 12:42:39.385501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.346 [2024-07-12 12:42:39.390154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.346 [2024-07-12 12:42:39.390245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.346 [2024-07-12 12:42:39.390266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.346 [2024-07-12 12:42:39.394942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.346 [2024-07-12 12:42:39.394992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.346 [2024-07-12 12:42:39.395005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.346 [2024-07-12 12:42:39.399803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.346 [2024-07-12 12:42:39.399841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.346 [2024-07-12 12:42:39.399855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.346 [2024-07-12 12:42:39.404584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.346 [2024-07-12 12:42:39.404634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.346 [2024-07-12 12:42:39.404657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.346 [2024-07-12 12:42:39.409139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.346 [2024-07-12 12:42:39.409193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.346 [2024-07-12 12:42:39.409207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.346 [2024-07-12 12:42:39.413834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.346 [2024-07-12 12:42:39.413876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.346 [2024-07-12 12:42:39.413890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.606 [2024-07-12 12:42:39.418421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.606 [2024-07-12 12:42:39.418457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.606 [2024-07-12 12:42:39.418471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.606 [2024-07-12 12:42:39.422960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.606 [2024-07-12 12:42:39.423003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.606 [2024-07-12 12:42:39.423017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.606 [2024-07-12 12:42:39.427673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.606 [2024-07-12 12:42:39.427711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.606 [2024-07-12 12:42:39.427725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.606 [2024-07-12 12:42:39.432185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.606 [2024-07-12 12:42:39.432224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.606 [2024-07-12 12:42:39.432239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.606 [2024-07-12 12:42:39.436804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.606 [2024-07-12 12:42:39.436847] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.606 [2024-07-12 12:42:39.436861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.606 [2024-07-12 12:42:39.441377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.606 [2024-07-12 12:42:39.441446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.606 [2024-07-12 12:42:39.441472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.606 [2024-07-12 12:42:39.446040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.606 [2024-07-12 12:42:39.446082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.606 [2024-07-12 12:42:39.446097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.606 [2024-07-12 12:42:39.450675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.606 [2024-07-12 12:42:39.450713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.606 [2024-07-12 12:42:39.450726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.606 [2024-07-12 12:42:39.455183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.606 [2024-07-12 12:42:39.455221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.606 [2024-07-12 12:42:39.455235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.606 [2024-07-12 12:42:39.459724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.606 [2024-07-12 12:42:39.459762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.606 [2024-07-12 12:42:39.459776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.606 [2024-07-12 12:42:39.464294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.606 [2024-07-12 12:42:39.464332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.606 [2024-07-12 12:42:39.464346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.606 [2024-07-12 12:42:39.468853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 
00:18:13.606 [2024-07-12 12:42:39.468892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.606 [2024-07-12 12:42:39.468906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.606 [2024-07-12 12:42:39.473306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.606 [2024-07-12 12:42:39.473343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.606 [2024-07-12 12:42:39.473357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.606 [2024-07-12 12:42:39.477874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.606 [2024-07-12 12:42:39.477930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.606 [2024-07-12 12:42:39.477944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.606 [2024-07-12 12:42:39.482512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.606 [2024-07-12 12:42:39.482548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.606 [2024-07-12 12:42:39.482563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.606 [2024-07-12 12:42:39.487121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.606 [2024-07-12 12:42:39.487175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.606 [2024-07-12 12:42:39.487189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.606 [2024-07-12 12:42:39.491676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.606 [2024-07-12 12:42:39.491713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.607 [2024-07-12 12:42:39.491728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.607 [2024-07-12 12:42:39.496134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.607 [2024-07-12 12:42:39.496181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.607 [2024-07-12 12:42:39.496195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.607 [2024-07-12 12:42:39.500690] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.607 [2024-07-12 12:42:39.500728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.607 [2024-07-12 12:42:39.500742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.607 [2024-07-12 12:42:39.505164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.607 [2024-07-12 12:42:39.505225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.607 [2024-07-12 12:42:39.505240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.607 [2024-07-12 12:42:39.509763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.607 [2024-07-12 12:42:39.509809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.607 [2024-07-12 12:42:39.509824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.607 [2024-07-12 12:42:39.514467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.607 [2024-07-12 12:42:39.514528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.607 [2024-07-12 12:42:39.514544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.607 [2024-07-12 12:42:39.519155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.607 [2024-07-12 12:42:39.519222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.607 [2024-07-12 12:42:39.519240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.607 [2024-07-12 12:42:39.523842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.607 [2024-07-12 12:42:39.523881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.607 [2024-07-12 12:42:39.523895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.607 [2024-07-12 12:42:39.528337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.607 [2024-07-12 12:42:39.528391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.607 [2024-07-12 12:42:39.528405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:18:13.607 [2024-07-12 12:42:39.533093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.607 [2024-07-12 12:42:39.533145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.607 [2024-07-12 12:42:39.533159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.607 [2024-07-12 12:42:39.537728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.607 [2024-07-12 12:42:39.537781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.607 [2024-07-12 12:42:39.537794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.607 [2024-07-12 12:42:39.542494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.607 [2024-07-12 12:42:39.542531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.607 [2024-07-12 12:42:39.542546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.607 [2024-07-12 12:42:39.547163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.607 [2024-07-12 12:42:39.547238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.607 [2024-07-12 12:42:39.547253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.607 [2024-07-12 12:42:39.551939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.607 [2024-07-12 12:42:39.551977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.607 [2024-07-12 12:42:39.551990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.607 [2024-07-12 12:42:39.556489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.607 [2024-07-12 12:42:39.556525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.607 [2024-07-12 12:42:39.556539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.607 [2024-07-12 12:42:39.561166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.607 [2024-07-12 12:42:39.561213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.607 [2024-07-12 12:42:39.561233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.607 [2024-07-12 12:42:39.565882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.607 [2024-07-12 12:42:39.565935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.607 [2024-07-12 12:42:39.565949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.607 [2024-07-12 12:42:39.570460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.607 [2024-07-12 12:42:39.570518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.607 [2024-07-12 12:42:39.570540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.607 [2024-07-12 12:42:39.575087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.607 [2024-07-12 12:42:39.575139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.607 [2024-07-12 12:42:39.575153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.607 [2024-07-12 12:42:39.579725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.607 [2024-07-12 12:42:39.579776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.607 [2024-07-12 12:42:39.579798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.607 [2024-07-12 12:42:39.584504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.607 [2024-07-12 12:42:39.584541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.607 [2024-07-12 12:42:39.584555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.607 [2024-07-12 12:42:39.588977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.607 [2024-07-12 12:42:39.589029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.607 [2024-07-12 12:42:39.589042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.607 [2024-07-12 12:42:39.593673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.607 [2024-07-12 12:42:39.593725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.607 [2024-07-12 12:42:39.593740] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.607 [2024-07-12 12:42:39.598359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.607 [2024-07-12 12:42:39.598398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.607 [2024-07-12 12:42:39.598436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.607 [2024-07-12 12:42:39.603008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.607 [2024-07-12 12:42:39.603061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.607 [2024-07-12 12:42:39.603074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.607 [2024-07-12 12:42:39.607656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.607 [2024-07-12 12:42:39.607694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.608 [2024-07-12 12:42:39.607708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.608 [2024-07-12 12:42:39.612353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.608 [2024-07-12 12:42:39.612390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.608 [2024-07-12 12:42:39.612433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.608 [2024-07-12 12:42:39.616987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.608 [2024-07-12 12:42:39.617040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.608 [2024-07-12 12:42:39.617054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.608 [2024-07-12 12:42:39.621695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.608 [2024-07-12 12:42:39.621733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.608 [2024-07-12 12:42:39.621746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.608 [2024-07-12 12:42:39.626315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.608 [2024-07-12 12:42:39.626366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:13.608 [2024-07-12 12:42:39.626388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.608 [2024-07-12 12:42:39.631144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.608 [2024-07-12 12:42:39.631196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.608 [2024-07-12 12:42:39.631210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.608 [2024-07-12 12:42:39.635983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.608 [2024-07-12 12:42:39.636033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.608 [2024-07-12 12:42:39.636047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.608 [2024-07-12 12:42:39.640719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.608 [2024-07-12 12:42:39.640771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.608 [2024-07-12 12:42:39.640785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.608 [2024-07-12 12:42:39.645443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.608 [2024-07-12 12:42:39.645510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.608 [2024-07-12 12:42:39.645525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.608 [2024-07-12 12:42:39.650289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.608 [2024-07-12 12:42:39.650334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.608 [2024-07-12 12:42:39.650349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.608 [2024-07-12 12:42:39.655039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.608 [2024-07-12 12:42:39.655114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.608 [2024-07-12 12:42:39.655129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.608 [2024-07-12 12:42:39.660038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.608 [2024-07-12 12:42:39.660095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.608 [2024-07-12 12:42:39.660109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.608 [2024-07-12 12:42:39.664976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.608 [2024-07-12 12:42:39.665015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.608 [2024-07-12 12:42:39.665029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.608 [2024-07-12 12:42:39.669732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.608 [2024-07-12 12:42:39.669790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.608 [2024-07-12 12:42:39.669820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.608 [2024-07-12 12:42:39.674729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.608 [2024-07-12 12:42:39.674783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.608 [2024-07-12 12:42:39.674798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.868 [2024-07-12 12:42:39.679702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.869 [2024-07-12 12:42:39.679740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.869 [2024-07-12 12:42:39.679754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.869 [2024-07-12 12:42:39.684293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.869 [2024-07-12 12:42:39.684346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.869 [2024-07-12 12:42:39.684361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.869 [2024-07-12 12:42:39.689072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.869 [2024-07-12 12:42:39.689112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.869 [2024-07-12 12:42:39.689126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.869 [2024-07-12 12:42:39.693801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.869 [2024-07-12 12:42:39.693853] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.869 [2024-07-12 12:42:39.693866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.869 [2024-07-12 12:42:39.698499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.869 [2024-07-12 12:42:39.698536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.869 [2024-07-12 12:42:39.698550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.869 [2024-07-12 12:42:39.703222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.869 [2024-07-12 12:42:39.703276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.869 [2024-07-12 12:42:39.703290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.869 [2024-07-12 12:42:39.708012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.869 [2024-07-12 12:42:39.708067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.869 [2024-07-12 12:42:39.708081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.869 [2024-07-12 12:42:39.712582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.869 [2024-07-12 12:42:39.712640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.869 [2024-07-12 12:42:39.712655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.869 [2024-07-12 12:42:39.717409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.869 [2024-07-12 12:42:39.717463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.869 [2024-07-12 12:42:39.717479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.869 [2024-07-12 12:42:39.722222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.869 [2024-07-12 12:42:39.722283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.869 [2024-07-12 12:42:39.722297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.869 [2024-07-12 12:42:39.726955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 
00:18:13.869 [2024-07-12 12:42:39.727024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.869 [2024-07-12 12:42:39.727037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.869 [2024-07-12 12:42:39.731723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.869 [2024-07-12 12:42:39.731778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.869 [2024-07-12 12:42:39.731795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.869 [2024-07-12 12:42:39.736376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.869 [2024-07-12 12:42:39.736426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.869 [2024-07-12 12:42:39.736440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.869 [2024-07-12 12:42:39.740965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.869 [2024-07-12 12:42:39.741017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.869 [2024-07-12 12:42:39.741031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.869 [2024-07-12 12:42:39.745506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.869 [2024-07-12 12:42:39.745556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.869 [2024-07-12 12:42:39.745569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.869 [2024-07-12 12:42:39.750141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.869 [2024-07-12 12:42:39.750213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.869 [2024-07-12 12:42:39.750233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.869 [2024-07-12 12:42:39.754725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.869 [2024-07-12 12:42:39.754776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.869 [2024-07-12 12:42:39.754805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.869 [2024-07-12 12:42:39.759293] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.869 [2024-07-12 12:42:39.759344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.869 [2024-07-12 12:42:39.759357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.869 [2024-07-12 12:42:39.763999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.869 [2024-07-12 12:42:39.764034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.869 [2024-07-12 12:42:39.764047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.869 [2024-07-12 12:42:39.768733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.869 [2024-07-12 12:42:39.768784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.869 [2024-07-12 12:42:39.768798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.869 [2024-07-12 12:42:39.773368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.869 [2024-07-12 12:42:39.773430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.869 [2024-07-12 12:42:39.773445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.869 [2024-07-12 12:42:39.778091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.869 [2024-07-12 12:42:39.778157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.869 [2024-07-12 12:42:39.778169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.869 [2024-07-12 12:42:39.782667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.869 [2024-07-12 12:42:39.782719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.869 [2024-07-12 12:42:39.782747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.869 [2024-07-12 12:42:39.787237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.869 [2024-07-12 12:42:39.787274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.869 [2024-07-12 12:42:39.787294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:18:13.869 [2024-07-12 12:42:39.791912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.869 [2024-07-12 12:42:39.791977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.869 [2024-07-12 12:42:39.791991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.869 [2024-07-12 12:42:39.796646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.869 [2024-07-12 12:42:39.796697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.869 [2024-07-12 12:42:39.796711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.869 [2024-07-12 12:42:39.801267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.869 [2024-07-12 12:42:39.801303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.869 [2024-07-12 12:42:39.801316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.869 [2024-07-12 12:42:39.806005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.869 [2024-07-12 12:42:39.806060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.869 [2024-07-12 12:42:39.806074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.869 [2024-07-12 12:42:39.811069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.869 [2024-07-12 12:42:39.811122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.869 [2024-07-12 12:42:39.811136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.869 [2024-07-12 12:42:39.816148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.870 [2024-07-12 12:42:39.816207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.870 [2024-07-12 12:42:39.816227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.870 [2024-07-12 12:42:39.820864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.870 [2024-07-12 12:42:39.820914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.870 [2024-07-12 12:42:39.820927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.870 [2024-07-12 12:42:39.825386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.870 [2024-07-12 12:42:39.825442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.870 [2024-07-12 12:42:39.825457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.870 [2024-07-12 12:42:39.830139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.870 [2024-07-12 12:42:39.830190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.870 [2024-07-12 12:42:39.830203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.870 [2024-07-12 12:42:39.834923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.870 [2024-07-12 12:42:39.834981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.870 [2024-07-12 12:42:39.834995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.870 [2024-07-12 12:42:39.839809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.870 [2024-07-12 12:42:39.839865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.870 [2024-07-12 12:42:39.839879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.870 [2024-07-12 12:42:39.844513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.870 [2024-07-12 12:42:39.844570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.870 [2024-07-12 12:42:39.844585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.870 [2024-07-12 12:42:39.849059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.870 [2024-07-12 12:42:39.849114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.870 [2024-07-12 12:42:39.849127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.870 [2024-07-12 12:42:39.853832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.870 [2024-07-12 12:42:39.853890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.870 [2024-07-12 12:42:39.853920] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.870 [2024-07-12 12:42:39.858671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.870 [2024-07-12 12:42:39.858729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.870 [2024-07-12 12:42:39.858743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.870 [2024-07-12 12:42:39.863328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.870 [2024-07-12 12:42:39.863364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.870 [2024-07-12 12:42:39.863378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.870 [2024-07-12 12:42:39.868092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.870 [2024-07-12 12:42:39.868159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.870 [2024-07-12 12:42:39.868172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.870 [2024-07-12 12:42:39.872796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.870 [2024-07-12 12:42:39.872846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.870 [2024-07-12 12:42:39.872859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.870 [2024-07-12 12:42:39.877587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.870 [2024-07-12 12:42:39.877637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.870 [2024-07-12 12:42:39.877650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.870 [2024-07-12 12:42:39.882094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.870 [2024-07-12 12:42:39.882145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.870 [2024-07-12 12:42:39.882158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.870 [2024-07-12 12:42:39.886809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.870 [2024-07-12 12:42:39.886876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:13.870 [2024-07-12 12:42:39.886890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.870 [2024-07-12 12:42:39.891379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.870 [2024-07-12 12:42:39.891430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.870 [2024-07-12 12:42:39.891473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.870 [2024-07-12 12:42:39.895923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.870 [2024-07-12 12:42:39.895988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.870 [2024-07-12 12:42:39.896017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.870 [2024-07-12 12:42:39.900717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.870 [2024-07-12 12:42:39.900768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.870 [2024-07-12 12:42:39.900782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.870 [2024-07-12 12:42:39.905503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.870 [2024-07-12 12:42:39.905538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.870 [2024-07-12 12:42:39.905551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.870 [2024-07-12 12:42:39.910321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.870 [2024-07-12 12:42:39.910357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.870 [2024-07-12 12:42:39.910371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.870 [2024-07-12 12:42:39.915248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.870 [2024-07-12 12:42:39.915302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.870 [2024-07-12 12:42:39.915315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.870 [2024-07-12 12:42:39.920092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.870 [2024-07-12 12:42:39.920145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.870 [2024-07-12 12:42:39.920159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:13.870 [2024-07-12 12:42:39.924879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.870 [2024-07-12 12:42:39.924928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.870 [2024-07-12 12:42:39.924957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:13.870 [2024-07-12 12:42:39.929631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.870 [2024-07-12 12:42:39.929683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.870 [2024-07-12 12:42:39.929696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:13.870 [2024-07-12 12:42:39.934192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.870 [2024-07-12 12:42:39.934242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.870 [2024-07-12 12:42:39.934255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:13.870 [2024-07-12 12:42:39.939028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:13.870 [2024-07-12 12:42:39.939079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.870 [2024-07-12 12:42:39.939108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.130 [2024-07-12 12:42:39.943791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.130 [2024-07-12 12:42:39.943828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.130 [2024-07-12 12:42:39.943841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.130 [2024-07-12 12:42:39.948548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.130 [2024-07-12 12:42:39.948639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.130 [2024-07-12 12:42:39.948661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.130 [2024-07-12 12:42:39.953195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.130 [2024-07-12 12:42:39.953236] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.130 [2024-07-12 12:42:39.953250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.130 [2024-07-12 12:42:39.957970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.130 [2024-07-12 12:42:39.958011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.130 [2024-07-12 12:42:39.958026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.130 [2024-07-12 12:42:39.962656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.130 [2024-07-12 12:42:39.962707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.130 [2024-07-12 12:42:39.962720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.130 [2024-07-12 12:42:39.967308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.130 [2024-07-12 12:42:39.967360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.130 [2024-07-12 12:42:39.967374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.130 [2024-07-12 12:42:39.972034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.130 [2024-07-12 12:42:39.972085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.130 [2024-07-12 12:42:39.972100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.131 [2024-07-12 12:42:39.976733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.131 [2024-07-12 12:42:39.976782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-07-12 12:42:39.976813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.131 [2024-07-12 12:42:39.981502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.131 [2024-07-12 12:42:39.981568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-07-12 12:42:39.981598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.131 [2024-07-12 12:42:39.986136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 
00:18:14.131 [2024-07-12 12:42:39.986186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-07-12 12:42:39.986199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.131 [2024-07-12 12:42:39.990879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.131 [2024-07-12 12:42:39.990930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-07-12 12:42:39.990958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.131 [2024-07-12 12:42:39.995576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.131 [2024-07-12 12:42:39.995634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-07-12 12:42:39.995650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.131 [2024-07-12 12:42:40.000294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.131 [2024-07-12 12:42:40.000329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-07-12 12:42:40.000342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.131 [2024-07-12 12:42:40.004907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.131 [2024-07-12 12:42:40.004960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-07-12 12:42:40.004973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.131 [2024-07-12 12:42:40.009598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.131 [2024-07-12 12:42:40.009659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-07-12 12:42:40.009681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.131 [2024-07-12 12:42:40.014351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.131 [2024-07-12 12:42:40.014388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-07-12 12:42:40.014431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.131 [2024-07-12 12:42:40.019074] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.131 [2024-07-12 12:42:40.019127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-07-12 12:42:40.019141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.131 [2024-07-12 12:42:40.023820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.131 [2024-07-12 12:42:40.023888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-07-12 12:42:40.023902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.131 [2024-07-12 12:42:40.028585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.131 [2024-07-12 12:42:40.028651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-07-12 12:42:40.028666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.131 [2024-07-12 12:42:40.033432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.131 [2024-07-12 12:42:40.033480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-07-12 12:42:40.033495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.131 [2024-07-12 12:42:40.038094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.131 [2024-07-12 12:42:40.038162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-07-12 12:42:40.038175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.131 [2024-07-12 12:42:40.042817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.131 [2024-07-12 12:42:40.042868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-07-12 12:42:40.042881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.131 [2024-07-12 12:42:40.047419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.131 [2024-07-12 12:42:40.047498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-07-12 12:42:40.047521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:18:14.131 [2024-07-12 12:42:40.052187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.131 [2024-07-12 12:42:40.052247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-07-12 12:42:40.052262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.131 [2024-07-12 12:42:40.056913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.131 [2024-07-12 12:42:40.056964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-07-12 12:42:40.056977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.131 [2024-07-12 12:42:40.061785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.131 [2024-07-12 12:42:40.061837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-07-12 12:42:40.061850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.131 [2024-07-12 12:42:40.066609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.131 [2024-07-12 12:42:40.066660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-07-12 12:42:40.066688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.131 [2024-07-12 12:42:40.071419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.131 [2024-07-12 12:42:40.071467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-07-12 12:42:40.071482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.131 [2024-07-12 12:42:40.076135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.131 [2024-07-12 12:42:40.076188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-07-12 12:42:40.076231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.131 [2024-07-12 12:42:40.080811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.131 [2024-07-12 12:42:40.080863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-07-12 12:42:40.080908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.131 [2024-07-12 12:42:40.085551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.131 [2024-07-12 12:42:40.085600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-07-12 12:42:40.085621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.131 [2024-07-12 12:42:40.090378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.131 [2024-07-12 12:42:40.090431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-07-12 12:42:40.090446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.131 [2024-07-12 12:42:40.095254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.132 [2024-07-12 12:42:40.095294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-07-12 12:42:40.095308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.132 [2024-07-12 12:42:40.100058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.132 [2024-07-12 12:42:40.100110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-07-12 12:42:40.100123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.132 [2024-07-12 12:42:40.104970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.132 [2024-07-12 12:42:40.105009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-07-12 12:42:40.105024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.132 [2024-07-12 12:42:40.109758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.132 [2024-07-12 12:42:40.109828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-07-12 12:42:40.109842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.132 [2024-07-12 12:42:40.114583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.132 [2024-07-12 12:42:40.114635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-07-12 12:42:40.114650] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.132 [2024-07-12 12:42:40.119245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.132 [2024-07-12 12:42:40.119283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-07-12 12:42:40.119297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.132 [2024-07-12 12:42:40.123999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.132 [2024-07-12 12:42:40.124052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-07-12 12:42:40.124065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.132 [2024-07-12 12:42:40.128530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.132 [2024-07-12 12:42:40.128604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-07-12 12:42:40.128626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.132 [2024-07-12 12:42:40.133339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.132 [2024-07-12 12:42:40.133377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-07-12 12:42:40.133390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.132 [2024-07-12 12:42:40.138129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.132 [2024-07-12 12:42:40.138181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-07-12 12:42:40.138210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.132 [2024-07-12 12:42:40.142797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.132 [2024-07-12 12:42:40.142865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-07-12 12:42:40.142879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.132 [2024-07-12 12:42:40.147436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.132 [2024-07-12 12:42:40.147506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:14.132 [2024-07-12 12:42:40.147520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.132 [2024-07-12 12:42:40.152088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.132 [2024-07-12 12:42:40.152140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-07-12 12:42:40.152153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.132 [2024-07-12 12:42:40.156783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.132 [2024-07-12 12:42:40.156820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-07-12 12:42:40.156835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.132 [2024-07-12 12:42:40.161471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.132 [2024-07-12 12:42:40.161508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-07-12 12:42:40.161523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.132 [2024-07-12 12:42:40.166123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.132 [2024-07-12 12:42:40.166177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-07-12 12:42:40.166192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.132 [2024-07-12 12:42:40.170728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.132 [2024-07-12 12:42:40.170766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-07-12 12:42:40.170780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.132 [2024-07-12 12:42:40.175281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.132 [2024-07-12 12:42:40.175320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-07-12 12:42:40.175334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.132 [2024-07-12 12:42:40.179952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.132 [2024-07-12 12:42:40.180017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-07-12 12:42:40.180040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.132 [2024-07-12 12:42:40.184920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.132 [2024-07-12 12:42:40.184972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-07-12 12:42:40.184986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.132 [2024-07-12 12:42:40.189676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.132 [2024-07-12 12:42:40.189729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-07-12 12:42:40.189742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.132 [2024-07-12 12:42:40.194405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.132 [2024-07-12 12:42:40.194469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-07-12 12:42:40.194484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.132 [2024-07-12 12:42:40.199162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.132 [2024-07-12 12:42:40.199215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-07-12 12:42:40.199228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.401 [2024-07-12 12:42:40.204290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.401 [2024-07-12 12:42:40.204330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.401 [2024-07-12 12:42:40.204344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.401 [2024-07-12 12:42:40.209297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.401 [2024-07-12 12:42:40.209335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.401 [2024-07-12 12:42:40.209350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.401 [2024-07-12 12:42:40.214585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.401 [2024-07-12 12:42:40.214622] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.401 [2024-07-12 12:42:40.214636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.401 [2024-07-12 12:42:40.220013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.401 [2024-07-12 12:42:40.220050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.401 [2024-07-12 12:42:40.220064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.401 [2024-07-12 12:42:40.224754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.401 [2024-07-12 12:42:40.224792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.401 [2024-07-12 12:42:40.224806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.401 [2024-07-12 12:42:40.229347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.401 [2024-07-12 12:42:40.229396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.401 [2024-07-12 12:42:40.229440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.401 [2024-07-12 12:42:40.234299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.401 [2024-07-12 12:42:40.234336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.401 [2024-07-12 12:42:40.234350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.401 [2024-07-12 12:42:40.239073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.401 [2024-07-12 12:42:40.239124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.401 [2024-07-12 12:42:40.239137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.401 [2024-07-12 12:42:40.243879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.401 [2024-07-12 12:42:40.243929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.401 [2024-07-12 12:42:40.243958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.401 [2024-07-12 12:42:40.248710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x24f1ac0) 00:18:14.401 [2024-07-12 12:42:40.248748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.401 [2024-07-12 12:42:40.248762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.401 [2024-07-12 12:42:40.253625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.401 [2024-07-12 12:42:40.253678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.401 [2024-07-12 12:42:40.253693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.401 [2024-07-12 12:42:40.258561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.401 [2024-07-12 12:42:40.258610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.401 [2024-07-12 12:42:40.258629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.401 [2024-07-12 12:42:40.264076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.401 [2024-07-12 12:42:40.264124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.401 [2024-07-12 12:42:40.264139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.401 [2024-07-12 12:42:40.269593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.402 [2024-07-12 12:42:40.269629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.402 [2024-07-12 12:42:40.269642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.402 [2024-07-12 12:42:40.274291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.402 [2024-07-12 12:42:40.274338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.402 [2024-07-12 12:42:40.274359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.402 [2024-07-12 12:42:40.278937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.402 [2024-07-12 12:42:40.278990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.402 [2024-07-12 12:42:40.279004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.402 [2024-07-12 12:42:40.283742] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.402 [2024-07-12 12:42:40.283783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.402 [2024-07-12 12:42:40.283797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.402 [2024-07-12 12:42:40.289395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.402 [2024-07-12 12:42:40.289476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.402 [2024-07-12 12:42:40.289501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.402 [2024-07-12 12:42:40.295118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.402 [2024-07-12 12:42:40.295190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.402 [2024-07-12 12:42:40.295211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.402 [2024-07-12 12:42:40.299763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.402 [2024-07-12 12:42:40.299831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.402 [2024-07-12 12:42:40.299844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.402 [2024-07-12 12:42:40.304523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.402 [2024-07-12 12:42:40.304587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.402 [2024-07-12 12:42:40.304600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.402 [2024-07-12 12:42:40.309142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.402 [2024-07-12 12:42:40.309195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.402 [2024-07-12 12:42:40.309209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.402 [2024-07-12 12:42:40.314093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.402 [2024-07-12 12:42:40.314130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.402 [2024-07-12 12:42:40.314143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:18:14.402 [2024-07-12 12:42:40.318795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.402 [2024-07-12 12:42:40.318861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.402 [2024-07-12 12:42:40.318875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.402 [2024-07-12 12:42:40.323415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.402 [2024-07-12 12:42:40.323493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.402 [2024-07-12 12:42:40.323509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.402 [2024-07-12 12:42:40.328021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.402 [2024-07-12 12:42:40.328071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.402 [2024-07-12 12:42:40.328084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.402 [2024-07-12 12:42:40.332726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.402 [2024-07-12 12:42:40.332778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.402 [2024-07-12 12:42:40.332792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.402 [2024-07-12 12:42:40.337425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.402 [2024-07-12 12:42:40.337488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.402 [2024-07-12 12:42:40.337511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.402 [2024-07-12 12:42:40.342107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.402 [2024-07-12 12:42:40.342157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.402 [2024-07-12 12:42:40.342170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.402 [2024-07-12 12:42:40.346672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.402 [2024-07-12 12:42:40.346720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.402 [2024-07-12 12:42:40.346734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.402 [2024-07-12 12:42:40.351217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.402 [2024-07-12 12:42:40.351250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.402 [2024-07-12 12:42:40.351262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.402 [2024-07-12 12:42:40.355623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.402 [2024-07-12 12:42:40.355657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.402 [2024-07-12 12:42:40.355670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.402 [2024-07-12 12:42:40.360157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.402 [2024-07-12 12:42:40.360206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.402 [2024-07-12 12:42:40.360218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.402 [2024-07-12 12:42:40.364716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.402 [2024-07-12 12:42:40.364763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.402 [2024-07-12 12:42:40.364776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.402 [2024-07-12 12:42:40.369231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.402 [2024-07-12 12:42:40.369264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.402 [2024-07-12 12:42:40.369277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.402 [2024-07-12 12:42:40.373694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.402 [2024-07-12 12:42:40.373733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.402 [2024-07-12 12:42:40.373746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.402 [2024-07-12 12:42:40.378568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.402 [2024-07-12 12:42:40.378607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.402 [2024-07-12 12:42:40.378622] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.402 [2024-07-12 12:42:40.383041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.402 [2024-07-12 12:42:40.383092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.402 [2024-07-12 12:42:40.383105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.402 [2024-07-12 12:42:40.387860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.402 [2024-07-12 12:42:40.387911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.402 [2024-07-12 12:42:40.387940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.402 [2024-07-12 12:42:40.392667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.402 [2024-07-12 12:42:40.392718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.402 [2024-07-12 12:42:40.392732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.402 [2024-07-12 12:42:40.397292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.402 [2024-07-12 12:42:40.397362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.402 [2024-07-12 12:42:40.397376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.402 [2024-07-12 12:42:40.402013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.402 [2024-07-12 12:42:40.402064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.402 [2024-07-12 12:42:40.402077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.402 [2024-07-12 12:42:40.406786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.402 [2024-07-12 12:42:40.406854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.402 [2024-07-12 12:42:40.406867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.402 [2024-07-12 12:42:40.411551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.402 [2024-07-12 12:42:40.411587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:14.402 [2024-07-12 12:42:40.411601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.402 [2024-07-12 12:42:40.416148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.402 [2024-07-12 12:42:40.416217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.402 [2024-07-12 12:42:40.416232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.402 [2024-07-12 12:42:40.420911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.402 [2024-07-12 12:42:40.420964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.402 [2024-07-12 12:42:40.420978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.402 [2024-07-12 12:42:40.425695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.402 [2024-07-12 12:42:40.425732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.402 [2024-07-12 12:42:40.425761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.402 [2024-07-12 12:42:40.430339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.402 [2024-07-12 12:42:40.430392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.402 [2024-07-12 12:42:40.430405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.402 [2024-07-12 12:42:40.435004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.402 [2024-07-12 12:42:40.435057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.402 [2024-07-12 12:42:40.435087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.402 [2024-07-12 12:42:40.439765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.402 [2024-07-12 12:42:40.439804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.402 [2024-07-12 12:42:40.439825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.402 [2024-07-12 12:42:40.444485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.402 [2024-07-12 12:42:40.444538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.402 [2024-07-12 12:42:40.444553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.402 [2024-07-12 12:42:40.449206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.402 [2024-07-12 12:42:40.449258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.402 [2024-07-12 12:42:40.449288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.402 [2024-07-12 12:42:40.453859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.402 [2024-07-12 12:42:40.453910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.402 [2024-07-12 12:42:40.453923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.402 [2024-07-12 12:42:40.458397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.402 [2024-07-12 12:42:40.458461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.402 [2024-07-12 12:42:40.458475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.402 [2024-07-12 12:42:40.463025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.402 [2024-07-12 12:42:40.463076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.402 [2024-07-12 12:42:40.463090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.402 [2024-07-12 12:42:40.467859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.402 [2024-07-12 12:42:40.467926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.402 [2024-07-12 12:42:40.467948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:14.660 [2024-07-12 12:42:40.472781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.660 [2024-07-12 12:42:40.472818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.660 [2024-07-12 12:42:40.472832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:14.660 [2024-07-12 12:42:40.477561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.660 [2024-07-12 12:42:40.477605] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.660 [2024-07-12 12:42:40.477620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:14.660 [2024-07-12 12:42:40.482256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24f1ac0) 00:18:14.660 [2024-07-12 12:42:40.482310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.660 [2024-07-12 12:42:40.482324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.660 00:18:14.660 Latency(us) 00:18:14.660 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.660 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:18:14.660 nvme0n1 : 2.00 6521.70 815.21 0.00 0.00 2449.10 2085.24 9353.77 00:18:14.660 =================================================================================================================== 00:18:14.660 Total : 6521.70 815.21 0.00 0.00 2449.10 2085.24 9353.77 00:18:14.660 0 00:18:14.660 12:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:14.660 12:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:14.660 12:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:14.660 | .driver_specific 00:18:14.660 | .nvme_error 00:18:14.660 | .status_code 00:18:14.660 | .command_transient_transport_error' 00:18:14.660 12:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:14.917 12:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 421 > 0 )) 00:18:14.917 12:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80823 00:18:14.917 12:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80823 ']' 00:18:14.917 12:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80823 00:18:14.917 12:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:18:14.917 12:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:14.917 12:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80823 00:18:14.917 killing process with pid 80823 00:18:14.917 Received shutdown signal, test time was about 2.000000 seconds 00:18:14.917 00:18:14.917 Latency(us) 00:18:14.917 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.917 =================================================================================================================== 00:18:14.917 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:14.917 12:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:14.917 12:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:14.917 12:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # 
echo 'killing process with pid 80823' 00:18:14.917 12:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80823 00:18:14.917 12:42:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80823 00:18:15.174 12:42:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:18:15.174 12:42:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:15.174 12:42:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:18:15.174 12:42:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:18:15.174 12:42:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:18:15.174 12:42:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80883 00:18:15.174 12:42:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:18:15.174 12:42:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80883 /var/tmp/bperf.sock 00:18:15.174 12:42:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80883 ']' 00:18:15.174 12:42:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:15.174 12:42:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:15.174 12:42:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:15.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:15.174 12:42:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:15.174 12:42:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:15.174 [2024-07-12 12:42:41.156020] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:18:15.174 [2024-07-12 12:42:41.156469] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80883 ] 00:18:15.431 [2024-07-12 12:42:41.294806] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.431 [2024-07-12 12:42:41.415951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:15.431 [2024-07-12 12:42:41.471963] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:16.364 12:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:16.364 12:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:18:16.364 12:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:16.364 12:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:16.364 12:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:16.364 12:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.364 12:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:16.364 12:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.364 12:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:16.364 12:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:16.929 nvme0n1 00:18:16.929 12:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:16.929 12:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.929 12:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:16.929 12:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.929 12:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:16.929 12:42:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:16.929 Running I/O for 2 seconds... 
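Condensed into a minimal shell sketch, the digest-error sequence traced above is roughly the following (assumptions: the bperf_rpc/rpc_cmd helpers expand to the rpc.py invocations shown in the trace, the accel_error_inject_error call goes to the target application's default RPC socket rather than /var/tmp/bperf.sock, and backgrounding with & stands in for the waitforlisten handshake; all paths, flags and addresses are the ones appearing in this workspace's trace):

  # start bdevperf idle (-z) on its own RPC socket: core mask 0x2, randwrite, 4 KiB IO, qd 128, 2 s run
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

  # keep per-command NVMe error statistics; the test also passes --bdev-retry-count -1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # attach the NVMe/TCP controller with data digest enabled (--ddgst) so CRC32C mismatches are detected
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # make the accel layer corrupt crc32c results (the test passes -t corrupt -i 256)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

  # drive the workload, then read back the transient-transport-error count the way
  # host/digest.sh's get_transient_errcount does
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The resulting count is what the (( count > 0 )) assertion in host/digest.sh checks, e.g. (( 421 > 0 )) for the randread pass shown earlier in this trace.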
00:18:16.929 [2024-07-12 12:42:42.873049] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190fef90 00:18:16.929 [2024-07-12 12:42:42.875623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.929 [2024-07-12 12:42:42.875672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:16.929 [2024-07-12 12:42:42.889295] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190feb58 00:18:16.930 [2024-07-12 12:42:42.891860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.930 [2024-07-12 12:42:42.891902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:16.930 [2024-07-12 12:42:42.905462] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190fe2e8 00:18:16.930 [2024-07-12 12:42:42.907940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.930 [2024-07-12 12:42:42.907980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:16.930 [2024-07-12 12:42:42.921541] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190fda78 00:18:16.930 [2024-07-12 12:42:42.924045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.930 [2024-07-12 12:42:42.924080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:16.930 [2024-07-12 12:42:42.937026] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190fd208 00:18:16.930 [2024-07-12 12:42:42.939515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.930 [2024-07-12 12:42:42.939556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:16.930 [2024-07-12 12:42:42.952984] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190fc998 00:18:16.930 [2024-07-12 12:42:42.955428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.930 [2024-07-12 12:42:42.955496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:16.930 [2024-07-12 12:42:42.968741] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190fc128 00:18:16.930 [2024-07-12 12:42:42.971023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.930 [2024-07-12 12:42:42.971057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 
m:0 dnr:0 00:18:16.930 [2024-07-12 12:42:42.983754] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190fb8b8 00:18:16.930 [2024-07-12 12:42:42.986190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.930 [2024-07-12 12:42:42.986251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:16.930 [2024-07-12 12:42:42.999267] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190fb048 00:18:16.930 [2024-07-12 12:42:43.001619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:16.930 [2024-07-12 12:42:43.001670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:17.187 [2024-07-12 12:42:43.015388] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190fa7d8 00:18:17.187 [2024-07-12 12:42:43.017813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.187 [2024-07-12 12:42:43.017857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:17.187 [2024-07-12 12:42:43.031483] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190f9f68 00:18:17.187 [2024-07-12 12:42:43.033844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.187 [2024-07-12 12:42:43.033888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:17.187 [2024-07-12 12:42:43.047680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190f96f8 00:18:17.187 [2024-07-12 12:42:43.050012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:24902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.187 [2024-07-12 12:42:43.050051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:17.187 [2024-07-12 12:42:43.063659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190f8e88 00:18:17.187 [2024-07-12 12:42:43.065971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.187 [2024-07-12 12:42:43.066006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:17.187 [2024-07-12 12:42:43.079662] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190f8618 00:18:17.187 [2024-07-12 12:42:43.081917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.187 [2024-07-12 12:42:43.081953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 
cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:17.187 [2024-07-12 12:42:43.095402] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190f7da8 00:18:17.187 [2024-07-12 12:42:43.097639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.187 [2024-07-12 12:42:43.097673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:17.187 [2024-07-12 12:42:43.111299] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190f7538 00:18:17.187 [2024-07-12 12:42:43.113521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.187 [2024-07-12 12:42:43.113553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:17.187 [2024-07-12 12:42:43.127081] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190f6cc8 00:18:17.187 [2024-07-12 12:42:43.129328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.187 [2024-07-12 12:42:43.129365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.187 [2024-07-12 12:42:43.143031] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190f6458 00:18:17.187 [2024-07-12 12:42:43.145207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.187 [2024-07-12 12:42:43.145243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:17.187 [2024-07-12 12:42:43.158777] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190f5be8 00:18:17.187 [2024-07-12 12:42:43.160963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.187 [2024-07-12 12:42:43.160999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:17.187 [2024-07-12 12:42:43.174596] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190f5378 00:18:17.187 [2024-07-12 12:42:43.176735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.187 [2024-07-12 12:42:43.176769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:17.187 [2024-07-12 12:42:43.190477] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190f4b08 00:18:17.187 [2024-07-12 12:42:43.192598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:8048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.187 [2024-07-12 12:42:43.192634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:17.187 [2024-07-12 12:42:43.206316] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190f4298 00:18:17.187 [2024-07-12 12:42:43.208457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.187 [2024-07-12 12:42:43.208491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:17.187 [2024-07-12 12:42:43.222268] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190f3a28 00:18:17.187 [2024-07-12 12:42:43.224363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:8659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.187 [2024-07-12 12:42:43.224429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:17.187 [2024-07-12 12:42:43.238512] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190f31b8 00:18:17.187 [2024-07-12 12:42:43.240665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.187 [2024-07-12 12:42:43.240717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:17.187 [2024-07-12 12:42:43.254354] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190f2948 00:18:17.187 [2024-07-12 12:42:43.256409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.187 [2024-07-12 12:42:43.256472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:17.445 [2024-07-12 12:42:43.270225] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190f20d8 00:18:17.445 [2024-07-12 12:42:43.272276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:8412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.445 [2024-07-12 12:42:43.272309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:17.445 [2024-07-12 12:42:43.285965] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190f1868 00:18:17.445 [2024-07-12 12:42:43.287979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.445 [2024-07-12 12:42:43.288014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:17.445 [2024-07-12 12:42:43.301866] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190f0ff8 00:18:17.445 [2024-07-12 12:42:43.303945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.445 [2024-07-12 12:42:43.303978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:17.445 [2024-07-12 12:42:43.318200] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190f0788 00:18:17.445 [2024-07-12 12:42:43.320281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.445 [2024-07-12 12:42:43.320341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:17.445 [2024-07-12 12:42:43.334066] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190eff18 00:18:17.445 [2024-07-12 12:42:43.336094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.445 [2024-07-12 12:42:43.336133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:17.445 [2024-07-12 12:42:43.349733] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190ef6a8 00:18:17.445 [2024-07-12 12:42:43.351750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.445 [2024-07-12 12:42:43.351808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:17.445 [2024-07-12 12:42:43.365115] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190eee38 00:18:17.445 [2024-07-12 12:42:43.367067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.445 [2024-07-12 12:42:43.367099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:17.445 [2024-07-12 12:42:43.380296] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190ee5c8 00:18:17.445 [2024-07-12 12:42:43.382335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.445 [2024-07-12 12:42:43.382372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.445 [2024-07-12 12:42:43.396025] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190edd58 00:18:17.445 [2024-07-12 12:42:43.397943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.445 [2024-07-12 12:42:43.397996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:17.445 [2024-07-12 12:42:43.412421] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190ed4e8 00:18:17.445 [2024-07-12 12:42:43.414341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:17577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.445 [2024-07-12 12:42:43.414387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:17.445 [2024-07-12 12:42:43.428872] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190ecc78 00:18:17.445 [2024-07-12 12:42:43.430765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.445 [2024-07-12 12:42:43.430823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:17.445 [2024-07-12 12:42:43.444972] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190ec408 00:18:17.445 [2024-07-12 12:42:43.446808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:4475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.445 [2024-07-12 12:42:43.446845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:17.445 [2024-07-12 12:42:43.460892] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190ebb98 00:18:17.445 [2024-07-12 12:42:43.462734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.445 [2024-07-12 12:42:43.462773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:17.445 [2024-07-12 12:42:43.476680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190eb328 00:18:17.445 [2024-07-12 12:42:43.478448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:9163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.445 [2024-07-12 12:42:43.478491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:17.445 [2024-07-12 12:42:43.492451] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190eaab8 00:18:17.445 [2024-07-12 12:42:43.494198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.445 [2024-07-12 12:42:43.494235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:17.445 [2024-07-12 12:42:43.508237] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190ea248 00:18:17.445 [2024-07-12 12:42:43.510032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.445 [2024-07-12 12:42:43.510072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:17.704 [2024-07-12 12:42:43.524748] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190e99d8 00:18:17.704 [2024-07-12 12:42:43.526564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.704 [2024-07-12 12:42:43.526623] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:17.704 [2024-07-12 12:42:43.540685] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190e9168 00:18:17.704 [2024-07-12 12:42:43.542474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.704 [2024-07-12 12:42:43.542520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:17.704 [2024-07-12 12:42:43.556598] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190e88f8 00:18:17.704 [2024-07-12 12:42:43.558272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:3470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.704 [2024-07-12 12:42:43.558311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:17.704 [2024-07-12 12:42:43.572305] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190e8088 00:18:17.704 [2024-07-12 12:42:43.574115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:25545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.704 [2024-07-12 12:42:43.574149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:17.704 [2024-07-12 12:42:43.588092] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190e7818 00:18:17.704 [2024-07-12 12:42:43.589758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.704 [2024-07-12 12:42:43.589806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:17.704 [2024-07-12 12:42:43.603684] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190e6fa8 00:18:17.704 [2024-07-12 12:42:43.605275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.704 [2024-07-12 12:42:43.605310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:17.704 [2024-07-12 12:42:43.619491] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190e6738 00:18:17.704 [2024-07-12 12:42:43.621096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.704 [2024-07-12 12:42:43.621139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:17.704 [2024-07-12 12:42:43.636208] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190e5ec8 00:18:17.704 [2024-07-12 12:42:43.637919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.704 [2024-07-12 
12:42:43.637964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.704 [2024-07-12 12:42:43.652472] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190e5658 00:18:17.704 [2024-07-12 12:42:43.654049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.704 [2024-07-12 12:42:43.654087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:17.704 [2024-07-12 12:42:43.668679] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190e4de8 00:18:17.704 [2024-07-12 12:42:43.670208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:18776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.704 [2024-07-12 12:42:43.670247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:17.704 [2024-07-12 12:42:43.685560] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190e4578 00:18:17.704 [2024-07-12 12:42:43.687367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.704 [2024-07-12 12:42:43.687421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:17.704 [2024-07-12 12:42:43.701513] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190e3d08 00:18:17.704 [2024-07-12 12:42:43.703097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.704 [2024-07-12 12:42:43.703138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:17.704 [2024-07-12 12:42:43.717804] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190e3498 00:18:17.704 [2024-07-12 12:42:43.719633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.704 [2024-07-12 12:42:43.719673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:17.704 [2024-07-12 12:42:43.734127] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190e2c28 00:18:17.704 [2024-07-12 12:42:43.735720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.704 [2024-07-12 12:42:43.735764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:17.704 [2024-07-12 12:42:43.749973] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190e23b8 00:18:17.704 [2024-07-12 12:42:43.751467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7484 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:17.704 [2024-07-12 12:42:43.751507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:17.704 [2024-07-12 12:42:43.765818] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190e1b48 00:18:17.704 [2024-07-12 12:42:43.767272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.704 [2024-07-12 12:42:43.767311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:17.963 [2024-07-12 12:42:43.782229] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190e12d8 00:18:17.963 [2024-07-12 12:42:43.783669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.963 [2024-07-12 12:42:43.783708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:17.963 [2024-07-12 12:42:43.798116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190e0a68 00:18:17.963 [2024-07-12 12:42:43.799598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.963 [2024-07-12 12:42:43.799638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:17.963 [2024-07-12 12:42:43.813974] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190e01f8 00:18:17.963 [2024-07-12 12:42:43.815503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.963 [2024-07-12 12:42:43.815545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:17.963 [2024-07-12 12:42:43.829918] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190df988 00:18:17.963 [2024-07-12 12:42:43.831308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.963 [2024-07-12 12:42:43.831346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:17.963 [2024-07-12 12:42:43.845853] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190df118 00:18:17.963 [2024-07-12 12:42:43.847277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.963 [2024-07-12 12:42:43.847319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:17.963 [2024-07-12 12:42:43.861752] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190de8a8 00:18:17.963 [2024-07-12 12:42:43.863162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 
lba:5898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.963 [2024-07-12 12:42:43.863218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:17.963 [2024-07-12 12:42:43.877548] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190de038 00:18:17.963 [2024-07-12 12:42:43.878865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.963 [2024-07-12 12:42:43.878907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:17.963 [2024-07-12 12:42:43.900746] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190de038 00:18:17.963 [2024-07-12 12:42:43.903273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.963 [2024-07-12 12:42:43.903316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.963 [2024-07-12 12:42:43.917010] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190de8a8 00:18:17.963 [2024-07-12 12:42:43.919533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.963 [2024-07-12 12:42:43.919583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:17.963 [2024-07-12 12:42:43.932907] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190df118 00:18:17.964 [2024-07-12 12:42:43.935394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.964 [2024-07-12 12:42:43.935440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:17.964 [2024-07-12 12:42:43.948771] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190df988 00:18:17.964 [2024-07-12 12:42:43.951197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.964 [2024-07-12 12:42:43.951235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:17.964 [2024-07-12 12:42:43.964518] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190e01f8 00:18:17.964 [2024-07-12 12:42:43.966907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.964 [2024-07-12 12:42:43.966943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:17.964 [2024-07-12 12:42:43.980336] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190e0a68 00:18:17.964 [2024-07-12 12:42:43.982786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:104 nsid:1 lba:19736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.964 [2024-07-12 12:42:43.982831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:17.964 [2024-07-12 12:42:43.996374] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190e12d8 00:18:17.964 [2024-07-12 12:42:43.998768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:3965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.964 [2024-07-12 12:42:43.998804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:17.964 [2024-07-12 12:42:44.012167] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190e1b48 00:18:17.964 [2024-07-12 12:42:44.014553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.964 [2024-07-12 12:42:44.014597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:17.964 [2024-07-12 12:42:44.027984] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190e23b8 00:18:17.964 [2024-07-12 12:42:44.030313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:15231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.964 [2024-07-12 12:42:44.030351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:18.222 [2024-07-12 12:42:44.043890] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190e2c28 00:18:18.222 [2024-07-12 12:42:44.046276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.222 [2024-07-12 12:42:44.046315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:18.222 [2024-07-12 12:42:44.060082] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190e3498 00:18:18.222 [2024-07-12 12:42:44.062439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:17162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.222 [2024-07-12 12:42:44.062518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:18.222 [2024-07-12 12:42:44.076142] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190e3d08 00:18:18.222 [2024-07-12 12:42:44.078497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.222 [2024-07-12 12:42:44.078531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:18.222 [2024-07-12 12:42:44.091970] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190e4578 00:18:18.222 [2024-07-12 12:42:44.094272] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:17014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.222 [2024-07-12 12:42:44.094305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:18.222 [2024-07-12 12:42:44.108127] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190e4de8 00:18:18.222 [2024-07-12 12:42:44.110455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:9712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.222 [2024-07-12 12:42:44.110496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:18.222 [2024-07-12 12:42:44.124353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190e5658 00:18:18.222 [2024-07-12 12:42:44.126650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.222 [2024-07-12 12:42:44.126683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:18.222 [2024-07-12 12:42:44.140735] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190e5ec8 00:18:18.222 [2024-07-12 12:42:44.143090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.222 [2024-07-12 12:42:44.143126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:18.222 [2024-07-12 12:42:44.157013] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190e6738 00:18:18.222 [2024-07-12 12:42:44.159287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.222 [2024-07-12 12:42:44.159327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:18.222 [2024-07-12 12:42:44.173243] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190e6fa8 00:18:18.222 [2024-07-12 12:42:44.175518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.222 [2024-07-12 12:42:44.175569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:18.222 [2024-07-12 12:42:44.189241] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190e7818 00:18:18.223 [2024-07-12 12:42:44.191377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.223 [2024-07-12 12:42:44.191424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:18.223 [2024-07-12 12:42:44.205357] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190e8088 00:18:18.223 [2024-07-12 
12:42:44.207522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:16664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.223 [2024-07-12 12:42:44.207562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:18.223 [2024-07-12 12:42:44.221350] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190e88f8 00:18:18.223 [2024-07-12 12:42:44.223494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.223 [2024-07-12 12:42:44.223531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:18.223 [2024-07-12 12:42:44.237382] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190e9168 00:18:18.223 [2024-07-12 12:42:44.239503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.223 [2024-07-12 12:42:44.239544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:18.223 [2024-07-12 12:42:44.253205] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190e99d8 00:18:18.223 [2024-07-12 12:42:44.255301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:19447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.223 [2024-07-12 12:42:44.255338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:18.223 [2024-07-12 12:42:44.269232] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190ea248 00:18:18.223 [2024-07-12 12:42:44.271290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.223 [2024-07-12 12:42:44.271327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:18.223 [2024-07-12 12:42:44.285253] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190eaab8 00:18:18.223 [2024-07-12 12:42:44.287303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.223 [2024-07-12 12:42:44.287346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:18.481 [2024-07-12 12:42:44.301211] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190eb328 00:18:18.481 [2024-07-12 12:42:44.303238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.481 [2024-07-12 12:42:44.303277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:18.481 [2024-07-12 12:42:44.317066] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190ebb98 
00:18:18.481 [2024-07-12 12:42:44.319061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.481 [2024-07-12 12:42:44.319094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:18.481 [2024-07-12 12:42:44.333022] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190ec408 00:18:18.481 [2024-07-12 12:42:44.335029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.481 [2024-07-12 12:42:44.335079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:18.482 [2024-07-12 12:42:44.348930] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190ecc78 00:18:18.482 [2024-07-12 12:42:44.350935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.482 [2024-07-12 12:42:44.350969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:18.482 [2024-07-12 12:42:44.364921] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190ed4e8 00:18:18.482 [2024-07-12 12:42:44.366833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.482 [2024-07-12 12:42:44.366867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:18.482 [2024-07-12 12:42:44.381049] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190edd58 00:18:18.482 [2024-07-12 12:42:44.382939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.482 [2024-07-12 12:42:44.382972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:18.482 [2024-07-12 12:42:44.396780] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190ee5c8 00:18:18.482 [2024-07-12 12:42:44.398658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.482 [2024-07-12 12:42:44.398692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:18.482 [2024-07-12 12:42:44.412649] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190eee38 00:18:18.482 [2024-07-12 12:42:44.414500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.482 [2024-07-12 12:42:44.414539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:18.482 [2024-07-12 12:42:44.428379] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with 
pdu=0x2000190ef6a8 00:18:18.482 [2024-07-12 12:42:44.430229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.482 [2024-07-12 12:42:44.430267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:18.482 [2024-07-12 12:42:44.444340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190eff18 00:18:18.482 [2024-07-12 12:42:44.446182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.482 [2024-07-12 12:42:44.446218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:18.482 [2024-07-12 12:42:44.460272] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190f0788 00:18:18.482 [2024-07-12 12:42:44.462087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:5879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.482 [2024-07-12 12:42:44.462123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:18.482 [2024-07-12 12:42:44.476279] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190f0ff8 00:18:18.482 [2024-07-12 12:42:44.478082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.482 [2024-07-12 12:42:44.478135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:18.482 [2024-07-12 12:42:44.492284] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190f1868 00:18:18.482 [2024-07-12 12:42:44.494062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.482 [2024-07-12 12:42:44.494096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:18.482 [2024-07-12 12:42:44.508292] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190f20d8 00:18:18.482 [2024-07-12 12:42:44.510064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.482 [2024-07-12 12:42:44.510101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:18.482 [2024-07-12 12:42:44.524064] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190f2948 00:18:18.482 [2024-07-12 12:42:44.525785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.482 [2024-07-12 12:42:44.525820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:18.482 [2024-07-12 12:42:44.539785] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1ba5360) with pdu=0x2000190f31b8 00:18:18.482 [2024-07-12 12:42:44.541500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.482 [2024-07-12 12:42:44.541536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:18.742 [2024-07-12 12:42:44.555578] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190f3a28 00:18:18.742 [2024-07-12 12:42:44.557270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.742 [2024-07-12 12:42:44.557322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:18.742 [2024-07-12 12:42:44.571558] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190f4298 00:18:18.742 [2024-07-12 12:42:44.573213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.742 [2024-07-12 12:42:44.573252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:18.742 [2024-07-12 12:42:44.587221] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190f4b08 00:18:18.742 [2024-07-12 12:42:44.588880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.742 [2024-07-12 12:42:44.588914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:18.742 [2024-07-12 12:42:44.602918] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190f5378 00:18:18.742 [2024-07-12 12:42:44.604545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.742 [2024-07-12 12:42:44.604580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:18.742 [2024-07-12 12:42:44.618602] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190f5be8 00:18:18.742 [2024-07-12 12:42:44.620226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.742 [2024-07-12 12:42:44.620262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:18.742 [2024-07-12 12:42:44.634503] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190f6458 00:18:18.742 [2024-07-12 12:42:44.636106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.742 [2024-07-12 12:42:44.636144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:18.742 [2024-07-12 12:42:44.650391] tcp.c:2067:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190f6cc8 00:18:18.742 [2024-07-12 12:42:44.652025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:9071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.742 [2024-07-12 12:42:44.652060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:18.742 [2024-07-12 12:42:44.666214] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190f7538 00:18:18.742 [2024-07-12 12:42:44.667787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.742 [2024-07-12 12:42:44.667823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:18.742 [2024-07-12 12:42:44.681982] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190f7da8 00:18:18.742 [2024-07-12 12:42:44.683555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:12018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.742 [2024-07-12 12:42:44.683596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:18.742 [2024-07-12 12:42:44.697919] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190f8618 00:18:18.742 [2024-07-12 12:42:44.699457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.742 [2024-07-12 12:42:44.699496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:18.742 [2024-07-12 12:42:44.713609] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190f8e88 00:18:18.742 [2024-07-12 12:42:44.715089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:24919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.742 [2024-07-12 12:42:44.715125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:18.742 [2024-07-12 12:42:44.729421] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190f96f8 00:18:18.742 [2024-07-12 12:42:44.730933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.742 [2024-07-12 12:42:44.730967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:18.742 [2024-07-12 12:42:44.745368] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190f9f68 00:18:18.742 [2024-07-12 12:42:44.746877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:25017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.743 [2024-07-12 12:42:44.746914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:18.743 [2024-07-12 12:42:44.761300] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190fa7d8 00:18:18.743 [2024-07-12 12:42:44.762738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:17679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.743 [2024-07-12 12:42:44.762773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:18.743 [2024-07-12 12:42:44.777113] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190fb048 00:18:18.743 [2024-07-12 12:42:44.778563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.743 [2024-07-12 12:42:44.778598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:18.743 [2024-07-12 12:42:44.793153] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190fb8b8 00:18:18.743 [2024-07-12 12:42:44.794617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.743 [2024-07-12 12:42:44.794655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:18.743 [2024-07-12 12:42:44.809130] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190fc128 00:18:18.743 [2024-07-12 12:42:44.810521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:15864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:18.743 [2024-07-12 12:42:44.810557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:19.001 [2024-07-12 12:42:44.825155] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190fc998 00:18:19.001 [2024-07-12 12:42:44.826574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.001 [2024-07-12 12:42:44.826607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:19.001 [2024-07-12 12:42:44.841436] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5360) with pdu=0x2000190fd208 00:18:19.001 [2024-07-12 12:42:44.842846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:19.001 [2024-07-12 12:42:44.842895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:19.001 00:18:19.001 Latency(us) 00:18:19.001 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.001 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:19.001 nvme0n1 : 2.00 15862.31 61.96 0.00 0.00 8061.06 4230.05 30980.65 00:18:19.001 =================================================================================================================== 00:18:19.001 Total : 15862.31 61.96 0.00 0.00 8061.06 4230.05 30980.65 00:18:19.001 0 00:18:19.002 
12:42:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:19.002 12:42:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:19.002 12:42:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:19.002 | .driver_specific 00:18:19.002 | .nvme_error 00:18:19.002 | .status_code 00:18:19.002 | .command_transient_transport_error' 00:18:19.002 12:42:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:19.260 12:42:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 124 > 0 )) 00:18:19.260 12:42:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80883 00:18:19.260 12:42:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80883 ']' 00:18:19.260 12:42:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80883 00:18:19.260 12:42:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:18:19.260 12:42:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:19.260 12:42:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80883 00:18:19.260 killing process with pid 80883 00:18:19.260 Received shutdown signal, test time was about 2.000000 seconds 00:18:19.260 00:18:19.260 Latency(us) 00:18:19.260 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.260 =================================================================================================================== 00:18:19.260 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:19.260 12:42:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:19.260 12:42:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:19.260 12:42:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80883' 00:18:19.261 12:42:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80883 00:18:19.261 12:42:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80883 00:18:19.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
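The check at host/digest.sh@71 above only asserts that the transient transport error counter is non-zero (124 in this run). For reference, the same counter can be read back by hand over the bperf RPC socket; a minimal sketch using the exact RPC and jq filter shown in the trace above:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'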
00:18:19.519 12:42:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:18:19.519 12:42:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:18:19.519 12:42:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:18:19.519 12:42:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:18:19.519 12:42:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:18:19.519 12:42:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80944 00:18:19.519 12:42:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:18:19.519 12:42:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80944 /var/tmp/bperf.sock 00:18:19.519 12:42:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 80944 ']' 00:18:19.519 12:42:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:19.519 12:42:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:19.519 12:42:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:19.519 12:42:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:19.519 12:42:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:19.519 [2024-07-12 12:42:45.413962] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:18:19.519 [2024-07-12 12:42:45.414252] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80944 ] 00:18:19.519 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:19.519 Zero copy mechanism will not be used. 
00:18:19.519 [2024-07-12 12:42:45.547100] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.777 [2024-07-12 12:42:45.672752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.777 [2024-07-12 12:42:45.732850] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:20.344 12:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:20.344 12:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:18:20.344 12:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:20.344 12:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:20.602 12:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:20.602 12:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.602 12:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:20.602 12:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.602 12:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:20.602 12:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:20.860 nvme0n1 00:18:20.860 12:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:20.860 12:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.860 12:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:20.860 12:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.860 12:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:20.860 12:42:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:21.119 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:21.119 Zero copy mechanism will not be used. 00:18:21.119 Running I/O for 2 seconds... 
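With the app up, the trace configures the run over RPC and then drives the workload: per-status-code error accounting with unlimited retries on the bperf side, the controller attached with data digest enabled (--ddgst), CRC32C corruption armed via rpc_cmd (which talks to the nvmf target application, so data-digest verification of the incoming writes fails), and finally perform_tests for the 2-second run. A condensed sketch of that sequence using the RPCs shown above; the default target RPC socket in the injection call is an assumption, since the trace only shows rpc_cmd, and the preliminary "-t disable" call that clears any earlier injection is omitted:

    bperf_rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    tgt_rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"   # assumed: nvmf target on its default socket

    # Keep per-status-code NVMe error counters and retry failed commands indefinitely.
    $bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the NVMe/TCP controller with data digest enabled (--ddgst); this exposes nvme0n1.
    $bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
            -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Arm accel error injection: corrupt 32 crc32c operations so the data digest of
    # incoming write PDUs no longer verifies on the target.
    $tgt_rpc accel_error_inject_error -o crc32c -t corrupt -i 32

    # Run the randwrite workload defined on the bdevperf command line; each digest failure
    # surfaces as the COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions logged below.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests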
00:18:21.119 [2024-07-12 12:42:47.050270] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.119 [2024-07-12 12:42:47.050667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.119 [2024-07-12 12:42:47.050699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.119 [2024-07-12 12:42:47.055843] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.119 [2024-07-12 12:42:47.056178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.119 [2024-07-12 12:42:47.056208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.119 [2024-07-12 12:42:47.061206] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.119 [2024-07-12 12:42:47.061552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.119 [2024-07-12 12:42:47.061581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.119 [2024-07-12 12:42:47.066373] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.119 [2024-07-12 12:42:47.066706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.119 [2024-07-12 12:42:47.066735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.119 [2024-07-12 12:42:47.071504] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.119 [2024-07-12 12:42:47.071812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.119 [2024-07-12 12:42:47.071841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.119 [2024-07-12 12:42:47.076690] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.119 [2024-07-12 12:42:47.076985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.119 [2024-07-12 12:42:47.077014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.119 [2024-07-12 12:42:47.081751] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.119 [2024-07-12 12:42:47.082049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.119 [2024-07-12 12:42:47.082079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.119 [2024-07-12 12:42:47.086835] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.119 [2024-07-12 12:42:47.087127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.119 [2024-07-12 12:42:47.087157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.119 [2024-07-12 12:42:47.091916] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.119 [2024-07-12 12:42:47.092208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.119 [2024-07-12 12:42:47.092237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.119 [2024-07-12 12:42:47.096956] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.119 [2024-07-12 12:42:47.097253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.119 [2024-07-12 12:42:47.097283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.119 [2024-07-12 12:42:47.102023] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.119 [2024-07-12 12:42:47.102320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.119 [2024-07-12 12:42:47.102350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.119 [2024-07-12 12:42:47.107069] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.119 [2024-07-12 12:42:47.107364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.119 [2024-07-12 12:42:47.107393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.119 [2024-07-12 12:42:47.112174] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.119 [2024-07-12 12:42:47.112504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.119 [2024-07-12 12:42:47.112544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.119 [2024-07-12 12:42:47.117350] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.119 [2024-07-12 12:42:47.117716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.120 [2024-07-12 12:42:47.117749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.120 [2024-07-12 12:42:47.122644] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.120 [2024-07-12 12:42:47.122956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.120 [2024-07-12 12:42:47.122983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.120 [2024-07-12 12:42:47.127850] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.120 [2024-07-12 12:42:47.128152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.120 [2024-07-12 12:42:47.128180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.120 [2024-07-12 12:42:47.132993] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.120 [2024-07-12 12:42:47.133294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.120 [2024-07-12 12:42:47.133321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.120 [2024-07-12 12:42:47.138195] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.120 [2024-07-12 12:42:47.138511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.120 [2024-07-12 12:42:47.138539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.120 [2024-07-12 12:42:47.143304] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.120 [2024-07-12 12:42:47.143619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.120 [2024-07-12 12:42:47.143647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.120 [2024-07-12 12:42:47.148628] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.120 [2024-07-12 12:42:47.148943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.120 [2024-07-12 12:42:47.148971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.120 [2024-07-12 12:42:47.153962] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.120 [2024-07-12 12:42:47.154252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.120 [2024-07-12 12:42:47.154280] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.120 [2024-07-12 12:42:47.159162] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.120 [2024-07-12 12:42:47.159464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.120 [2024-07-12 12:42:47.159493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.120 [2024-07-12 12:42:47.163886] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.120 [2024-07-12 12:42:47.163995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.120 [2024-07-12 12:42:47.164018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.120 [2024-07-12 12:42:47.169047] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.120 [2024-07-12 12:42:47.169133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.120 [2024-07-12 12:42:47.169154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.120 [2024-07-12 12:42:47.174122] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.120 [2024-07-12 12:42:47.174192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.120 [2024-07-12 12:42:47.174214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.120 [2024-07-12 12:42:47.179176] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.120 [2024-07-12 12:42:47.179260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.120 [2024-07-12 12:42:47.179282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.120 [2024-07-12 12:42:47.184296] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.120 [2024-07-12 12:42:47.184364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.120 [2024-07-12 12:42:47.184386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.120 [2024-07-12 12:42:47.189590] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.120 [2024-07-12 12:42:47.189664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.120 [2024-07-12 
12:42:47.189687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.380 [2024-07-12 12:42:47.194828] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.380 [2024-07-12 12:42:47.194911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.380 [2024-07-12 12:42:47.194934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.380 [2024-07-12 12:42:47.199841] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.380 [2024-07-12 12:42:47.199921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.380 [2024-07-12 12:42:47.199944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.380 [2024-07-12 12:42:47.205072] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.380 [2024-07-12 12:42:47.205175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.380 [2024-07-12 12:42:47.205198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.380 [2024-07-12 12:42:47.210093] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.380 [2024-07-12 12:42:47.210175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.380 [2024-07-12 12:42:47.210196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.380 [2024-07-12 12:42:47.215151] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.381 [2024-07-12 12:42:47.215253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.381 [2024-07-12 12:42:47.215275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.381 [2024-07-12 12:42:47.220199] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.381 [2024-07-12 12:42:47.220280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.381 [2024-07-12 12:42:47.220302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.381 [2024-07-12 12:42:47.225268] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.381 [2024-07-12 12:42:47.225348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:21.381 [2024-07-12 12:42:47.225370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.381 [2024-07-12 12:42:47.230365] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.381 [2024-07-12 12:42:47.230484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.381 [2024-07-12 12:42:47.230507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.381 [2024-07-12 12:42:47.235350] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.381 [2024-07-12 12:42:47.235465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.381 [2024-07-12 12:42:47.235488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.381 [2024-07-12 12:42:47.240344] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.381 [2024-07-12 12:42:47.240427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.381 [2024-07-12 12:42:47.240467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.381 [2024-07-12 12:42:47.245422] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.381 [2024-07-12 12:42:47.245517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.381 [2024-07-12 12:42:47.245539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.381 [2024-07-12 12:42:47.250516] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.381 [2024-07-12 12:42:47.250589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.381 [2024-07-12 12:42:47.250611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.381 [2024-07-12 12:42:47.255491] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.381 [2024-07-12 12:42:47.255559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.381 [2024-07-12 12:42:47.255581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.381 [2024-07-12 12:42:47.260548] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.381 [2024-07-12 12:42:47.260633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.381 [2024-07-12 12:42:47.260655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.381 [2024-07-12 12:42:47.265580] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.381 [2024-07-12 12:42:47.265647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.381 [2024-07-12 12:42:47.265669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.381 [2024-07-12 12:42:47.270661] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.381 [2024-07-12 12:42:47.270748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.381 [2024-07-12 12:42:47.270770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.381 [2024-07-12 12:42:47.275791] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.381 [2024-07-12 12:42:47.275909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.381 [2024-07-12 12:42:47.275931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.381 [2024-07-12 12:42:47.280929] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.381 [2024-07-12 12:42:47.281002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.381 [2024-07-12 12:42:47.281025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.381 [2024-07-12 12:42:47.286150] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.381 [2024-07-12 12:42:47.286238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.381 [2024-07-12 12:42:47.286261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.381 [2024-07-12 12:42:47.291227] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.381 [2024-07-12 12:42:47.291298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.381 [2024-07-12 12:42:47.291322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.381 [2024-07-12 12:42:47.296234] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.381 [2024-07-12 12:42:47.296318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.381 [2024-07-12 12:42:47.296340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.381 [2024-07-12 12:42:47.301309] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.381 [2024-07-12 12:42:47.301375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.381 [2024-07-12 12:42:47.301398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.381 [2024-07-12 12:42:47.306260] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.381 [2024-07-12 12:42:47.306345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.381 [2024-07-12 12:42:47.306366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.381 [2024-07-12 12:42:47.311438] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.381 [2024-07-12 12:42:47.311528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.381 [2024-07-12 12:42:47.311551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.381 [2024-07-12 12:42:47.316525] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.381 [2024-07-12 12:42:47.316622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.381 [2024-07-12 12:42:47.316644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.381 [2024-07-12 12:42:47.321587] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.381 [2024-07-12 12:42:47.321672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.381 [2024-07-12 12:42:47.321693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.381 [2024-07-12 12:42:47.326609] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.381 [2024-07-12 12:42:47.326682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.381 [2024-07-12 12:42:47.326704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.381 [2024-07-12 12:42:47.331513] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.381 [2024-07-12 12:42:47.331584] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.381 [2024-07-12 12:42:47.331606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.381 [2024-07-12 12:42:47.336467] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.381 [2024-07-12 12:42:47.336549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.381 [2024-07-12 12:42:47.336570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.381 [2024-07-12 12:42:47.341405] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.381 [2024-07-12 12:42:47.341518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.381 [2024-07-12 12:42:47.341541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.381 [2024-07-12 12:42:47.346380] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.381 [2024-07-12 12:42:47.346493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.381 [2024-07-12 12:42:47.346515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.381 [2024-07-12 12:42:47.351386] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.381 [2024-07-12 12:42:47.351480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.381 [2024-07-12 12:42:47.351502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.381 [2024-07-12 12:42:47.356256] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.381 [2024-07-12 12:42:47.356358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.381 [2024-07-12 12:42:47.356380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.381 [2024-07-12 12:42:47.361366] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.381 [2024-07-12 12:42:47.361462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.382 [2024-07-12 12:42:47.361485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.382 [2024-07-12 12:42:47.366394] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.382 [2024-07-12 12:42:47.366491] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.382 [2024-07-12 12:42:47.366514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.382 [2024-07-12 12:42:47.371585] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.382 [2024-07-12 12:42:47.371643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.382 [2024-07-12 12:42:47.371665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.382 [2024-07-12 12:42:47.376629] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.382 [2024-07-12 12:42:47.376696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.382 [2024-07-12 12:42:47.376719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.382 [2024-07-12 12:42:47.381688] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.382 [2024-07-12 12:42:47.381754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.382 [2024-07-12 12:42:47.381776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.382 [2024-07-12 12:42:47.386763] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.382 [2024-07-12 12:42:47.386845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.382 [2024-07-12 12:42:47.386866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.382 [2024-07-12 12:42:47.391910] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.382 [2024-07-12 12:42:47.392008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.382 [2024-07-12 12:42:47.392029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.382 [2024-07-12 12:42:47.396977] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.382 [2024-07-12 12:42:47.397075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.382 [2024-07-12 12:42:47.397096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.382 [2024-07-12 12:42:47.402208] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.382 [2024-07-12 
12:42:47.402280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.382 [2024-07-12 12:42:47.402302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.382 [2024-07-12 12:42:47.407279] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.382 [2024-07-12 12:42:47.407351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.382 [2024-07-12 12:42:47.407373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.382 [2024-07-12 12:42:47.412442] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.382 [2024-07-12 12:42:47.412695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.382 [2024-07-12 12:42:47.412719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.382 [2024-07-12 12:42:47.417661] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.382 [2024-07-12 12:42:47.417732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.382 [2024-07-12 12:42:47.417755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.382 [2024-07-12 12:42:47.422672] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.382 [2024-07-12 12:42:47.422745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.382 [2024-07-12 12:42:47.422767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.382 [2024-07-12 12:42:47.427801] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.382 [2024-07-12 12:42:47.427869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.382 [2024-07-12 12:42:47.427891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.382 [2024-07-12 12:42:47.432804] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.382 [2024-07-12 12:42:47.432876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.382 [2024-07-12 12:42:47.432898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.382 [2024-07-12 12:42:47.437909] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 
00:18:21.382 [2024-07-12 12:42:47.437976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.382 [2024-07-12 12:42:47.437999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.382 [2024-07-12 12:42:47.442914] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.382 [2024-07-12 12:42:47.442984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.382 [2024-07-12 12:42:47.443006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.382 [2024-07-12 12:42:47.447920] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.382 [2024-07-12 12:42:47.447986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.382 [2024-07-12 12:42:47.448008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.650 [2024-07-12 12:42:47.452981] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.650 [2024-07-12 12:42:47.453050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.650 [2024-07-12 12:42:47.453075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.650 [2024-07-12 12:42:47.458011] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.650 [2024-07-12 12:42:47.458083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.650 [2024-07-12 12:42:47.458107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.650 [2024-07-12 12:42:47.463014] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.650 [2024-07-12 12:42:47.463087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.650 [2024-07-12 12:42:47.463109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.650 [2024-07-12 12:42:47.468064] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.650 [2024-07-12 12:42:47.468135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.650 [2024-07-12 12:42:47.468158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.650 [2024-07-12 12:42:47.473058] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.650 [2024-07-12 12:42:47.473131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.650 [2024-07-12 12:42:47.473162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.650 [2024-07-12 12:42:47.478097] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.650 [2024-07-12 12:42:47.478181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.650 [2024-07-12 12:42:47.478204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.650 [2024-07-12 12:42:47.483264] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.650 [2024-07-12 12:42:47.483332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.650 [2024-07-12 12:42:47.483355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.650 [2024-07-12 12:42:47.488384] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.650 [2024-07-12 12:42:47.488472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.650 [2024-07-12 12:42:47.488496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.650 [2024-07-12 12:42:47.493430] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.650 [2024-07-12 12:42:47.493506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.650 [2024-07-12 12:42:47.493530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.650 [2024-07-12 12:42:47.498518] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.650 [2024-07-12 12:42:47.498591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.650 [2024-07-12 12:42:47.498614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.650 [2024-07-12 12:42:47.503571] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.650 [2024-07-12 12:42:47.503638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.650 [2024-07-12 12:42:47.503661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.650 [2024-07-12 12:42:47.508643] tcp.c:2067:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.650 [2024-07-12 12:42:47.508729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.650 [2024-07-12 12:42:47.508752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.650 [2024-07-12 12:42:47.513746] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.650 [2024-07-12 12:42:47.513847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.650 [2024-07-12 12:42:47.513868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.650 [2024-07-12 12:42:47.518963] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.650 [2024-07-12 12:42:47.519047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.650 [2024-07-12 12:42:47.519069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.650 [2024-07-12 12:42:47.524231] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.650 [2024-07-12 12:42:47.524335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.650 [2024-07-12 12:42:47.524358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.650 [2024-07-12 12:42:47.529289] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.650 [2024-07-12 12:42:47.529362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.650 [2024-07-12 12:42:47.529385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.650 [2024-07-12 12:42:47.534327] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.650 [2024-07-12 12:42:47.534395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.650 [2024-07-12 12:42:47.534434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.650 [2024-07-12 12:42:47.539360] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.650 [2024-07-12 12:42:47.539454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.650 [2024-07-12 12:42:47.539477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.650 [2024-07-12 12:42:47.544491] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.650 [2024-07-12 12:42:47.544559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.650 [2024-07-12 12:42:47.544581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.650 [2024-07-12 12:42:47.549624] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.650 [2024-07-12 12:42:47.549720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.650 [2024-07-12 12:42:47.549742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.650 [2024-07-12 12:42:47.554747] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.650 [2024-07-12 12:42:47.554814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.650 [2024-07-12 12:42:47.554836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.650 [2024-07-12 12:42:47.559982] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.650 [2024-07-12 12:42:47.560081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.650 [2024-07-12 12:42:47.560103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.650 [2024-07-12 12:42:47.565042] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.650 [2024-07-12 12:42:47.565142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.650 [2024-07-12 12:42:47.565164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.650 [2024-07-12 12:42:47.570120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.650 [2024-07-12 12:42:47.570201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.650 [2024-07-12 12:42:47.570223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.650 [2024-07-12 12:42:47.575261] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.651 [2024-07-12 12:42:47.575348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.651 [2024-07-12 12:42:47.575371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.651 
[2024-07-12 12:42:47.580325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.651 [2024-07-12 12:42:47.580410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.651 [2024-07-12 12:42:47.580432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.651 [2024-07-12 12:42:47.585362] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.651 [2024-07-12 12:42:47.585448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.651 [2024-07-12 12:42:47.585470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.651 [2024-07-12 12:42:47.590388] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.651 [2024-07-12 12:42:47.590487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.651 [2024-07-12 12:42:47.590510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.651 [2024-07-12 12:42:47.595357] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.651 [2024-07-12 12:42:47.595482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.651 [2024-07-12 12:42:47.595505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.651 [2024-07-12 12:42:47.600341] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.651 [2024-07-12 12:42:47.600426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.651 [2024-07-12 12:42:47.600448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.651 [2024-07-12 12:42:47.605342] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.651 [2024-07-12 12:42:47.605409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.651 [2024-07-12 12:42:47.605432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.651 [2024-07-12 12:42:47.610419] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.651 [2024-07-12 12:42:47.610516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.651 [2024-07-12 12:42:47.610539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:18:21.651 [2024-07-12 12:42:47.615374] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.651 [2024-07-12 12:42:47.615484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.651 [2024-07-12 12:42:47.615507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.651 [2024-07-12 12:42:47.620685] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.651 [2024-07-12 12:42:47.620753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.651 [2024-07-12 12:42:47.620775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.651 [2024-07-12 12:42:47.625931] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.651 [2024-07-12 12:42:47.625997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.651 [2024-07-12 12:42:47.626020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.651 [2024-07-12 12:42:47.630995] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.651 [2024-07-12 12:42:47.631065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.651 [2024-07-12 12:42:47.631087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.651 [2024-07-12 12:42:47.635948] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.651 [2024-07-12 12:42:47.636052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.651 [2024-07-12 12:42:47.636074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.651 [2024-07-12 12:42:47.641007] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.651 [2024-07-12 12:42:47.641091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.651 [2024-07-12 12:42:47.641113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.651 [2024-07-12 12:42:47.646094] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.651 [2024-07-12 12:42:47.646175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.651 [2024-07-12 12:42:47.646198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.651 [2024-07-12 12:42:47.651310] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.651 [2024-07-12 12:42:47.651378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.651 [2024-07-12 12:42:47.651400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.651 [2024-07-12 12:42:47.656456] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.651 [2024-07-12 12:42:47.656554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.651 [2024-07-12 12:42:47.656576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.651 [2024-07-12 12:42:47.661675] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.651 [2024-07-12 12:42:47.661776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.651 [2024-07-12 12:42:47.661797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.651 [2024-07-12 12:42:47.666832] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.651 [2024-07-12 12:42:47.666927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.651 [2024-07-12 12:42:47.666948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.651 [2024-07-12 12:42:47.671890] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.651 [2024-07-12 12:42:47.671963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.651 [2024-07-12 12:42:47.671985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.651 [2024-07-12 12:42:47.677005] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.651 [2024-07-12 12:42:47.677090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.651 [2024-07-12 12:42:47.677112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.651 [2024-07-12 12:42:47.682195] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.651 [2024-07-12 12:42:47.682266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.651 [2024-07-12 12:42:47.682288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.651 [2024-07-12 12:42:47.687269] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.651 [2024-07-12 12:42:47.687341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.651 [2024-07-12 12:42:47.687364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.651 [2024-07-12 12:42:47.692289] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.651 [2024-07-12 12:42:47.692363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.651 [2024-07-12 12:42:47.692385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.651 [2024-07-12 12:42:47.697247] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.651 [2024-07-12 12:42:47.697314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.651 [2024-07-12 12:42:47.697336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.651 [2024-07-12 12:42:47.702318] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.651 [2024-07-12 12:42:47.702390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.651 [2024-07-12 12:42:47.702412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.651 [2024-07-12 12:42:47.707307] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.651 [2024-07-12 12:42:47.707377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.651 [2024-07-12 12:42:47.707412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.651 [2024-07-12 12:42:47.712360] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.651 [2024-07-12 12:42:47.712446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.651 [2024-07-12 12:42:47.712469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.651 [2024-07-12 12:42:47.717336] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.651 [2024-07-12 12:42:47.717418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.651 [2024-07-12 12:42:47.717440] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.912 [2024-07-12 12:42:47.722368] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.912 [2024-07-12 12:42:47.722449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.912 [2024-07-12 12:42:47.722472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.912 [2024-07-12 12:42:47.727462] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.912 [2024-07-12 12:42:47.727540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.912 [2024-07-12 12:42:47.727562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.912 [2024-07-12 12:42:47.732590] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.912 [2024-07-12 12:42:47.732690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.912 [2024-07-12 12:42:47.732712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.912 [2024-07-12 12:42:47.737736] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.912 [2024-07-12 12:42:47.737834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.912 [2024-07-12 12:42:47.737855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.912 [2024-07-12 12:42:47.742934] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.912 [2024-07-12 12:42:47.743007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.912 [2024-07-12 12:42:47.743030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.912 [2024-07-12 12:42:47.747929] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.912 [2024-07-12 12:42:47.747997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.912 [2024-07-12 12:42:47.748020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.912 [2024-07-12 12:42:47.752970] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.912 [2024-07-12 12:42:47.753072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.912 [2024-07-12 
12:42:47.753094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.912 [2024-07-12 12:42:47.758031] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.912 [2024-07-12 12:42:47.758116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.912 [2024-07-12 12:42:47.758139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.912 [2024-07-12 12:42:47.763177] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.912 [2024-07-12 12:42:47.763251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.912 [2024-07-12 12:42:47.763273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.912 [2024-07-12 12:42:47.768190] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.912 [2024-07-12 12:42:47.768256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.912 [2024-07-12 12:42:47.768278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.912 [2024-07-12 12:42:47.773247] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.912 [2024-07-12 12:42:47.773317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.912 [2024-07-12 12:42:47.773340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.912 [2024-07-12 12:42:47.778251] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.912 [2024-07-12 12:42:47.778322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.912 [2024-07-12 12:42:47.778343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.912 [2024-07-12 12:42:47.783414] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.912 [2024-07-12 12:42:47.783509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.912 [2024-07-12 12:42:47.783532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.912 [2024-07-12 12:42:47.788458] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.912 [2024-07-12 12:42:47.788529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:21.912 [2024-07-12 12:42:47.788552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.912 [2024-07-12 12:42:47.793575] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.912 [2024-07-12 12:42:47.793643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.912 [2024-07-12 12:42:47.793665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.912 [2024-07-12 12:42:47.798589] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.912 [2024-07-12 12:42:47.798655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.912 [2024-07-12 12:42:47.798677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.912 [2024-07-12 12:42:47.803597] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.912 [2024-07-12 12:42:47.803664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.912 [2024-07-12 12:42:47.803686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.912 [2024-07-12 12:42:47.808691] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.912 [2024-07-12 12:42:47.808764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.912 [2024-07-12 12:42:47.808786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.912 [2024-07-12 12:42:47.813760] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.912 [2024-07-12 12:42:47.813828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.913 [2024-07-12 12:42:47.813851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.913 [2024-07-12 12:42:47.818804] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.913 [2024-07-12 12:42:47.818870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.913 [2024-07-12 12:42:47.818892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.913 [2024-07-12 12:42:47.823899] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.913 [2024-07-12 12:42:47.823967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.913 [2024-07-12 12:42:47.823989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.913 [2024-07-12 12:42:47.828946] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.913 [2024-07-12 12:42:47.829014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.913 [2024-07-12 12:42:47.829037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.913 [2024-07-12 12:42:47.833965] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.913 [2024-07-12 12:42:47.834051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.913 [2024-07-12 12:42:47.834073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.913 [2024-07-12 12:42:47.838975] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.913 [2024-07-12 12:42:47.839045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.913 [2024-07-12 12:42:47.839072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.913 [2024-07-12 12:42:47.843984] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.913 [2024-07-12 12:42:47.844070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.913 [2024-07-12 12:42:47.844092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.913 [2024-07-12 12:42:47.849048] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.913 [2024-07-12 12:42:47.849119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.913 [2024-07-12 12:42:47.849141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.913 [2024-07-12 12:42:47.854119] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.913 [2024-07-12 12:42:47.854201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.913 [2024-07-12 12:42:47.854223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.913 [2024-07-12 12:42:47.859210] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.913 [2024-07-12 12:42:47.859279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.913 [2024-07-12 12:42:47.859301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.913 [2024-07-12 12:42:47.864265] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.913 [2024-07-12 12:42:47.864332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.913 [2024-07-12 12:42:47.864354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.913 [2024-07-12 12:42:47.869304] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.913 [2024-07-12 12:42:47.869376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.913 [2024-07-12 12:42:47.869398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.913 [2024-07-12 12:42:47.874436] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.913 [2024-07-12 12:42:47.874504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.913 [2024-07-12 12:42:47.874527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.913 [2024-07-12 12:42:47.879438] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.913 [2024-07-12 12:42:47.879518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.913 [2024-07-12 12:42:47.879540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.913 [2024-07-12 12:42:47.884374] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.913 [2024-07-12 12:42:47.884451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.913 [2024-07-12 12:42:47.884474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.913 [2024-07-12 12:42:47.889369] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.913 [2024-07-12 12:42:47.889446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.913 [2024-07-12 12:42:47.889469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.913 [2024-07-12 12:42:47.894350] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.913 [2024-07-12 12:42:47.894436] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.913 [2024-07-12 12:42:47.894459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.913 [2024-07-12 12:42:47.899366] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.913 [2024-07-12 12:42:47.899460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.913 [2024-07-12 12:42:47.899483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.913 [2024-07-12 12:42:47.904394] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.913 [2024-07-12 12:42:47.904476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.913 [2024-07-12 12:42:47.904498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.913 [2024-07-12 12:42:47.909350] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.913 [2024-07-12 12:42:47.909435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.913 [2024-07-12 12:42:47.909458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.913 [2024-07-12 12:42:47.914344] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.913 [2024-07-12 12:42:47.914428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.913 [2024-07-12 12:42:47.914450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.913 [2024-07-12 12:42:47.919314] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.913 [2024-07-12 12:42:47.919386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.913 [2024-07-12 12:42:47.919421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.913 [2024-07-12 12:42:47.924307] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.913 [2024-07-12 12:42:47.924376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.913 [2024-07-12 12:42:47.924398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.913 [2024-07-12 12:42:47.929275] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.913 [2024-07-12 12:42:47.929346] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.913 [2024-07-12 12:42:47.929369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.913 [2024-07-12 12:42:47.934238] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.913 [2024-07-12 12:42:47.934311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.913 [2024-07-12 12:42:47.934333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.913 [2024-07-12 12:42:47.939210] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.913 [2024-07-12 12:42:47.939279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.913 [2024-07-12 12:42:47.939301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.913 [2024-07-12 12:42:47.944252] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.913 [2024-07-12 12:42:47.944326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.913 [2024-07-12 12:42:47.944349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.913 [2024-07-12 12:42:47.949255] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.913 [2024-07-12 12:42:47.949321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.913 [2024-07-12 12:42:47.949343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.913 [2024-07-12 12:42:47.954237] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.913 [2024-07-12 12:42:47.954306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.913 [2024-07-12 12:42:47.954328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.913 [2024-07-12 12:42:47.959215] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.913 [2024-07-12 12:42:47.959282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.913 [2024-07-12 12:42:47.959304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:21.914 [2024-07-12 12:42:47.964250] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.914 [2024-07-12 
12:42:47.964323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.914 [2024-07-12 12:42:47.964346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:21.914 [2024-07-12 12:42:47.969232] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.914 [2024-07-12 12:42:47.969301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.914 [2024-07-12 12:42:47.969324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:21.914 [2024-07-12 12:42:47.974243] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.914 [2024-07-12 12:42:47.974313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.914 [2024-07-12 12:42:47.974335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:21.914 [2024-07-12 12:42:47.979204] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:21.914 [2024-07-12 12:42:47.979273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:21.914 [2024-07-12 12:42:47.979295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.173 [2024-07-12 12:42:47.984219] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.173 [2024-07-12 12:42:47.984289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.173 [2024-07-12 12:42:47.984311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.173 [2024-07-12 12:42:47.989255] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.173 [2024-07-12 12:42:47.989328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.173 [2024-07-12 12:42:47.989351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.173 [2024-07-12 12:42:47.994248] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.173 [2024-07-12 12:42:47.994320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.173 [2024-07-12 12:42:47.994342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.173 [2024-07-12 12:42:47.999233] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with 
pdu=0x2000190fef90 00:18:22.173 [2024-07-12 12:42:47.999301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.173 [2024-07-12 12:42:47.999323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.173 [2024-07-12 12:42:48.004215] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.173 [2024-07-12 12:42:48.004288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.173 [2024-07-12 12:42:48.004310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.173 [2024-07-12 12:42:48.009204] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.173 [2024-07-12 12:42:48.009277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.173 [2024-07-12 12:42:48.009299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.174 [2024-07-12 12:42:48.014201] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.174 [2024-07-12 12:42:48.014270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.174 [2024-07-12 12:42:48.014292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.174 [2024-07-12 12:42:48.019161] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.174 [2024-07-12 12:42:48.019232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.174 [2024-07-12 12:42:48.019255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.174 [2024-07-12 12:42:48.024116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.174 [2024-07-12 12:42:48.024188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.174 [2024-07-12 12:42:48.024210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.174 [2024-07-12 12:42:48.029094] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.174 [2024-07-12 12:42:48.029164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.174 [2024-07-12 12:42:48.029186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.174 [2024-07-12 12:42:48.034094] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.174 [2024-07-12 12:42:48.034167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.174 [2024-07-12 12:42:48.034190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.174 [2024-07-12 12:42:48.039099] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.174 [2024-07-12 12:42:48.039167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.174 [2024-07-12 12:42:48.039189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.174 [2024-07-12 12:42:48.044111] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.174 [2024-07-12 12:42:48.044182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.174 [2024-07-12 12:42:48.044205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.174 [2024-07-12 12:42:48.049068] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.174 [2024-07-12 12:42:48.049135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.174 [2024-07-12 12:42:48.049157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.174 [2024-07-12 12:42:48.054086] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.174 [2024-07-12 12:42:48.054159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.174 [2024-07-12 12:42:48.054182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.174 [2024-07-12 12:42:48.059083] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.174 [2024-07-12 12:42:48.059151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.174 [2024-07-12 12:42:48.059173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.174 [2024-07-12 12:42:48.064059] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.174 [2024-07-12 12:42:48.064134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.174 [2024-07-12 12:42:48.064157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.174 [2024-07-12 12:42:48.069078] tcp.c:2067:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.174 [2024-07-12 12:42:48.069150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.174 [2024-07-12 12:42:48.069172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.174 [2024-07-12 12:42:48.074095] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.174 [2024-07-12 12:42:48.074168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.174 [2024-07-12 12:42:48.074190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.174 [2024-07-12 12:42:48.079082] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.174 [2024-07-12 12:42:48.079152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.174 [2024-07-12 12:42:48.079174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.174 [2024-07-12 12:42:48.084047] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.174 [2024-07-12 12:42:48.084118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.174 [2024-07-12 12:42:48.084140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.174 [2024-07-12 12:42:48.089023] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.174 [2024-07-12 12:42:48.089090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.174 [2024-07-12 12:42:48.089113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.174 [2024-07-12 12:42:48.094033] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.174 [2024-07-12 12:42:48.094107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.174 [2024-07-12 12:42:48.094130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.174 [2024-07-12 12:42:48.098993] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.174 [2024-07-12 12:42:48.099066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.174 [2024-07-12 12:42:48.099088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.174 [2024-07-12 12:42:48.103984] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.174 [2024-07-12 12:42:48.104054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.174 [2024-07-12 12:42:48.104076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.174 [2024-07-12 12:42:48.108951] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.174 [2024-07-12 12:42:48.109020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.174 [2024-07-12 12:42:48.109041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.174 [2024-07-12 12:42:48.113932] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.174 [2024-07-12 12:42:48.114003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.174 [2024-07-12 12:42:48.114025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.174 [2024-07-12 12:42:48.118943] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.174 [2024-07-12 12:42:48.119015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.174 [2024-07-12 12:42:48.119037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.174 [2024-07-12 12:42:48.123987] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.174 [2024-07-12 12:42:48.124057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.174 [2024-07-12 12:42:48.124079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.174 [2024-07-12 12:42:48.128960] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.174 [2024-07-12 12:42:48.129032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.174 [2024-07-12 12:42:48.129054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.174 [2024-07-12 12:42:48.133952] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.174 [2024-07-12 12:42:48.134023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.174 [2024-07-12 12:42:48.134045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.174 
[2024-07-12 12:42:48.138994] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.174 [2024-07-12 12:42:48.139063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.174 [2024-07-12 12:42:48.139086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.174 [2024-07-12 12:42:48.143978] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.174 [2024-07-12 12:42:48.144045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.174 [2024-07-12 12:42:48.144068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.174 [2024-07-12 12:42:48.148919] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.174 [2024-07-12 12:42:48.148986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.174 [2024-07-12 12:42:48.149008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.174 [2024-07-12 12:42:48.153878] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.175 [2024-07-12 12:42:48.153945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.175 [2024-07-12 12:42:48.153967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.175 [2024-07-12 12:42:48.158843] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.175 [2024-07-12 12:42:48.158916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.175 [2024-07-12 12:42:48.158938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.175 [2024-07-12 12:42:48.163835] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.175 [2024-07-12 12:42:48.163911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.175 [2024-07-12 12:42:48.163934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.175 [2024-07-12 12:42:48.168863] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.175 [2024-07-12 12:42:48.168930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.175 [2024-07-12 12:42:48.168951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:18:22.175 [2024-07-12 12:42:48.173891] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.175 [2024-07-12 12:42:48.173959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.175 [2024-07-12 12:42:48.173982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.175 [2024-07-12 12:42:48.178952] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.175 [2024-07-12 12:42:48.179024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.175 [2024-07-12 12:42:48.179047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.175 [2024-07-12 12:42:48.183948] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.175 [2024-07-12 12:42:48.184024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.175 [2024-07-12 12:42:48.184047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.175 [2024-07-12 12:42:48.188947] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.175 [2024-07-12 12:42:48.189019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.175 [2024-07-12 12:42:48.189041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.175 [2024-07-12 12:42:48.193929] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.175 [2024-07-12 12:42:48.193997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.175 [2024-07-12 12:42:48.194019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.175 [2024-07-12 12:42:48.198929] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.175 [2024-07-12 12:42:48.198995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.175 [2024-07-12 12:42:48.199019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.175 [2024-07-12 12:42:48.204050] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.175 [2024-07-12 12:42:48.204120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.175 [2024-07-12 12:42:48.204143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.175 [2024-07-12 12:42:48.209164] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.175 [2024-07-12 12:42:48.209231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.175 [2024-07-12 12:42:48.209253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.175 [2024-07-12 12:42:48.214131] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.175 [2024-07-12 12:42:48.214204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.175 [2024-07-12 12:42:48.214226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.175 [2024-07-12 12:42:48.219163] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.175 [2024-07-12 12:42:48.219235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.175 [2024-07-12 12:42:48.219258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.175 [2024-07-12 12:42:48.224176] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.175 [2024-07-12 12:42:48.224248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.175 [2024-07-12 12:42:48.224270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.175 [2024-07-12 12:42:48.229232] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.175 [2024-07-12 12:42:48.229301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.175 [2024-07-12 12:42:48.229323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.175 [2024-07-12 12:42:48.234214] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.175 [2024-07-12 12:42:48.234283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.175 [2024-07-12 12:42:48.234305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.175 [2024-07-12 12:42:48.239189] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.175 [2024-07-12 12:42:48.239259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.175 [2024-07-12 12:42:48.239286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.175 [2024-07-12 12:42:48.244158] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.175 [2024-07-12 12:42:48.244225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.175 [2024-07-12 12:42:48.244247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.434 [2024-07-12 12:42:48.249173] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.434 [2024-07-12 12:42:48.249245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.434 [2024-07-12 12:42:48.249267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.434 [2024-07-12 12:42:48.254126] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.434 [2024-07-12 12:42:48.254195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.434 [2024-07-12 12:42:48.254218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.434 [2024-07-12 12:42:48.259090] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.434 [2024-07-12 12:42:48.259157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.434 [2024-07-12 12:42:48.259179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.434 [2024-07-12 12:42:48.264097] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.434 [2024-07-12 12:42:48.264169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.434 [2024-07-12 12:42:48.264191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.434 [2024-07-12 12:42:48.269061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.434 [2024-07-12 12:42:48.269128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.435 [2024-07-12 12:42:48.269150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.435 [2024-07-12 12:42:48.274067] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.435 [2024-07-12 12:42:48.274140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.435 [2024-07-12 12:42:48.274162] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.435 [2024-07-12 12:42:48.279050] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.435 [2024-07-12 12:42:48.279118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.435 [2024-07-12 12:42:48.279141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.435 [2024-07-12 12:42:48.284037] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.435 [2024-07-12 12:42:48.284110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.435 [2024-07-12 12:42:48.284133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.435 [2024-07-12 12:42:48.289051] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.435 [2024-07-12 12:42:48.289119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.435 [2024-07-12 12:42:48.289141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.435 [2024-07-12 12:42:48.294039] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.435 [2024-07-12 12:42:48.294110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.435 [2024-07-12 12:42:48.294132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.435 [2024-07-12 12:42:48.299034] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.435 [2024-07-12 12:42:48.299101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.435 [2024-07-12 12:42:48.299123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.435 [2024-07-12 12:42:48.304047] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.435 [2024-07-12 12:42:48.304116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.435 [2024-07-12 12:42:48.304140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.435 [2024-07-12 12:42:48.309077] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.435 [2024-07-12 12:42:48.309149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.435 [2024-07-12 12:42:48.309172] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.435 [2024-07-12 12:42:48.314109] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.435 [2024-07-12 12:42:48.314182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.435 [2024-07-12 12:42:48.314204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.435 [2024-07-12 12:42:48.319120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.435 [2024-07-12 12:42:48.319187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.435 [2024-07-12 12:42:48.319210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.435 [2024-07-12 12:42:48.324124] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.435 [2024-07-12 12:42:48.324193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.435 [2024-07-12 12:42:48.324215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.435 [2024-07-12 12:42:48.329088] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.435 [2024-07-12 12:42:48.329161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.435 [2024-07-12 12:42:48.329184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.435 [2024-07-12 12:42:48.334073] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.435 [2024-07-12 12:42:48.334140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.435 [2024-07-12 12:42:48.334162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.435 [2024-07-12 12:42:48.339043] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.435 [2024-07-12 12:42:48.339109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.435 [2024-07-12 12:42:48.339131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.435 [2024-07-12 12:42:48.344065] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.435 [2024-07-12 12:42:48.344138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.435 [2024-07-12 
12:42:48.344160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.435 [2024-07-12 12:42:48.349011] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.435 [2024-07-12 12:42:48.349076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.435 [2024-07-12 12:42:48.349098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.435 [2024-07-12 12:42:48.353977] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.435 [2024-07-12 12:42:48.354044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.435 [2024-07-12 12:42:48.354066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.435 [2024-07-12 12:42:48.358969] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.435 [2024-07-12 12:42:48.359042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.435 [2024-07-12 12:42:48.359063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.435 [2024-07-12 12:42:48.364019] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.435 [2024-07-12 12:42:48.364085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.435 [2024-07-12 12:42:48.364108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.435 [2024-07-12 12:42:48.369032] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.435 [2024-07-12 12:42:48.369099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.435 [2024-07-12 12:42:48.369121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.435 [2024-07-12 12:42:48.374033] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.435 [2024-07-12 12:42:48.374109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.435 [2024-07-12 12:42:48.374131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.435 [2024-07-12 12:42:48.379075] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.435 [2024-07-12 12:42:48.379147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:22.435 [2024-07-12 12:42:48.379169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.435 [2024-07-12 12:42:48.384081] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.435 [2024-07-12 12:42:48.384154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.435 [2024-07-12 12:42:48.384176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.435 [2024-07-12 12:42:48.389033] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.435 [2024-07-12 12:42:48.389103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.435 [2024-07-12 12:42:48.389125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.435 [2024-07-12 12:42:48.394076] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.435 [2024-07-12 12:42:48.394149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.435 [2024-07-12 12:42:48.394171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.435 [2024-07-12 12:42:48.399065] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.435 [2024-07-12 12:42:48.399137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.435 [2024-07-12 12:42:48.399166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.435 [2024-07-12 12:42:48.404100] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.435 [2024-07-12 12:42:48.404168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.435 [2024-07-12 12:42:48.404190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.435 [2024-07-12 12:42:48.409128] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.435 [2024-07-12 12:42:48.409196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.435 [2024-07-12 12:42:48.409219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.435 [2024-07-12 12:42:48.414143] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.436 [2024-07-12 12:42:48.414212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.436 [2024-07-12 12:42:48.414234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.436 [2024-07-12 12:42:48.419130] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.436 [2024-07-12 12:42:48.419198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.436 [2024-07-12 12:42:48.419220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.436 [2024-07-12 12:42:48.424167] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.436 [2024-07-12 12:42:48.424233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.436 [2024-07-12 12:42:48.424256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.436 [2024-07-12 12:42:48.429166] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.436 [2024-07-12 12:42:48.429239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.436 [2024-07-12 12:42:48.429262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.436 [2024-07-12 12:42:48.434175] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.436 [2024-07-12 12:42:48.434248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.436 [2024-07-12 12:42:48.434271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.436 [2024-07-12 12:42:48.439223] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.436 [2024-07-12 12:42:48.439292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.436 [2024-07-12 12:42:48.439314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.436 [2024-07-12 12:42:48.444275] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.436 [2024-07-12 12:42:48.444346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.436 [2024-07-12 12:42:48.444369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.436 [2024-07-12 12:42:48.449345] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.436 [2024-07-12 12:42:48.449435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.436 [2024-07-12 12:42:48.449458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.436 [2024-07-12 12:42:48.454348] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.436 [2024-07-12 12:42:48.454432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.436 [2024-07-12 12:42:48.454454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.436 [2024-07-12 12:42:48.459309] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.436 [2024-07-12 12:42:48.459382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.436 [2024-07-12 12:42:48.459417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.436 [2024-07-12 12:42:48.464316] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.436 [2024-07-12 12:42:48.464383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.436 [2024-07-12 12:42:48.464418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.436 [2024-07-12 12:42:48.469323] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.436 [2024-07-12 12:42:48.469395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.436 [2024-07-12 12:42:48.469431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.436 [2024-07-12 12:42:48.474325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.436 [2024-07-12 12:42:48.474393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.436 [2024-07-12 12:42:48.474429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.436 [2024-07-12 12:42:48.479305] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.436 [2024-07-12 12:42:48.479373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.436 [2024-07-12 12:42:48.479395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.436 [2024-07-12 12:42:48.484337] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.436 [2024-07-12 12:42:48.484419] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.436 [2024-07-12 12:42:48.484441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.436 [2024-07-12 12:42:48.489338] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.436 [2024-07-12 12:42:48.489419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.436 [2024-07-12 12:42:48.489442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.436 [2024-07-12 12:42:48.494349] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.436 [2024-07-12 12:42:48.494442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.436 [2024-07-12 12:42:48.494465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.436 [2024-07-12 12:42:48.499339] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.436 [2024-07-12 12:42:48.499425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.436 [2024-07-12 12:42:48.499457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.436 [2024-07-12 12:42:48.504330] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.436 [2024-07-12 12:42:48.504415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.436 [2024-07-12 12:42:48.504437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.695 [2024-07-12 12:42:48.509338] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.695 [2024-07-12 12:42:48.509421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.695 [2024-07-12 12:42:48.509445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.695 [2024-07-12 12:42:48.514328] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.695 [2024-07-12 12:42:48.514414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.695 [2024-07-12 12:42:48.514437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.695 [2024-07-12 12:42:48.519306] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.695 [2024-07-12 12:42:48.519376] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.695 [2024-07-12 12:42:48.519413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.695 [2024-07-12 12:42:48.524321] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.695 [2024-07-12 12:42:48.524389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.695 [2024-07-12 12:42:48.524425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.695 [2024-07-12 12:42:48.529338] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.695 [2024-07-12 12:42:48.529416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.695 [2024-07-12 12:42:48.529440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.695 [2024-07-12 12:42:48.534355] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.695 [2024-07-12 12:42:48.534442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.695 [2024-07-12 12:42:48.534464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.695 [2024-07-12 12:42:48.539327] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.696 [2024-07-12 12:42:48.539395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.696 [2024-07-12 12:42:48.539432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.696 [2024-07-12 12:42:48.544360] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.696 [2024-07-12 12:42:48.544449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.696 [2024-07-12 12:42:48.544471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.696 [2024-07-12 12:42:48.549347] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.696 [2024-07-12 12:42:48.549431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.696 [2024-07-12 12:42:48.549454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.696 [2024-07-12 12:42:48.554305] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.696 [2024-07-12 
12:42:48.554373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.696 [2024-07-12 12:42:48.554395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.696 [2024-07-12 12:42:48.559290] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.696 [2024-07-12 12:42:48.559359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.696 [2024-07-12 12:42:48.559381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.696 [2024-07-12 12:42:48.564293] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.696 [2024-07-12 12:42:48.564360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.696 [2024-07-12 12:42:48.564381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.696 [2024-07-12 12:42:48.569254] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.696 [2024-07-12 12:42:48.569323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.696 [2024-07-12 12:42:48.569345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.696 [2024-07-12 12:42:48.574260] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.696 [2024-07-12 12:42:48.574327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.696 [2024-07-12 12:42:48.574349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.696 [2024-07-12 12:42:48.579218] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.696 [2024-07-12 12:42:48.579290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.696 [2024-07-12 12:42:48.579312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.696 [2024-07-12 12:42:48.584232] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.696 [2024-07-12 12:42:48.584305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.696 [2024-07-12 12:42:48.584327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.696 [2024-07-12 12:42:48.589182] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with 
pdu=0x2000190fef90 00:18:22.696 [2024-07-12 12:42:48.589249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.696 [2024-07-12 12:42:48.589271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.696 [2024-07-12 12:42:48.594191] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.696 [2024-07-12 12:42:48.594258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.696 [2024-07-12 12:42:48.594281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.696 [2024-07-12 12:42:48.599175] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.696 [2024-07-12 12:42:48.599246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.696 [2024-07-12 12:42:48.599268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.696 [2024-07-12 12:42:48.604219] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.696 [2024-07-12 12:42:48.604288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.696 [2024-07-12 12:42:48.604310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.696 [2024-07-12 12:42:48.609194] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.696 [2024-07-12 12:42:48.609262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.696 [2024-07-12 12:42:48.609284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.696 [2024-07-12 12:42:48.614238] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.696 [2024-07-12 12:42:48.614308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.696 [2024-07-12 12:42:48.614329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.696 [2024-07-12 12:42:48.619249] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.696 [2024-07-12 12:42:48.619323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.696 [2024-07-12 12:42:48.619345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.696 [2024-07-12 12:42:48.624251] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.696 [2024-07-12 12:42:48.624318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.696 [2024-07-12 12:42:48.624340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.696 [2024-07-12 12:42:48.629203] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.696 [2024-07-12 12:42:48.629272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.696 [2024-07-12 12:42:48.629294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.696 [2024-07-12 12:42:48.634188] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.696 [2024-07-12 12:42:48.634258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.696 [2024-07-12 12:42:48.634280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.696 [2024-07-12 12:42:48.639190] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.696 [2024-07-12 12:42:48.639255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.696 [2024-07-12 12:42:48.639278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.696 [2024-07-12 12:42:48.644234] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.696 [2024-07-12 12:42:48.644303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.696 [2024-07-12 12:42:48.644325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.696 [2024-07-12 12:42:48.649225] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.696 [2024-07-12 12:42:48.649294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.696 [2024-07-12 12:42:48.649316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.696 [2024-07-12 12:42:48.654219] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.696 [2024-07-12 12:42:48.654289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.696 [2024-07-12 12:42:48.654311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.696 [2024-07-12 12:42:48.659196] tcp.c:2067:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.696 [2024-07-12 12:42:48.659265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.696 [2024-07-12 12:42:48.659286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.696 [2024-07-12 12:42:48.664166] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.696 [2024-07-12 12:42:48.664233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.696 [2024-07-12 12:42:48.664255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.696 [2024-07-12 12:42:48.669148] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.696 [2024-07-12 12:42:48.669220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.696 [2024-07-12 12:42:48.669242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.696 [2024-07-12 12:42:48.674140] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.696 [2024-07-12 12:42:48.674207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.696 [2024-07-12 12:42:48.674228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.696 [2024-07-12 12:42:48.679132] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.696 [2024-07-12 12:42:48.679205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.696 [2024-07-12 12:42:48.679227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.696 [2024-07-12 12:42:48.684111] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.696 [2024-07-12 12:42:48.684181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.697 [2024-07-12 12:42:48.684204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.697 [2024-07-12 12:42:48.689108] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.697 [2024-07-12 12:42:48.689181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.697 [2024-07-12 12:42:48.689203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.697 [2024-07-12 12:42:48.694098] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.697 [2024-07-12 12:42:48.694171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.697 [2024-07-12 12:42:48.694193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.697 [2024-07-12 12:42:48.699050] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.697 [2024-07-12 12:42:48.699118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.697 [2024-07-12 12:42:48.699140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.697 [2024-07-12 12:42:48.704024] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.697 [2024-07-12 12:42:48.704093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.697 [2024-07-12 12:42:48.704115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.697 [2024-07-12 12:42:48.708975] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.697 [2024-07-12 12:42:48.709044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.697 [2024-07-12 12:42:48.709067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.697 [2024-07-12 12:42:48.714007] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.697 [2024-07-12 12:42:48.714080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.697 [2024-07-12 12:42:48.714102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.697 [2024-07-12 12:42:48.719024] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.697 [2024-07-12 12:42:48.719098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.697 [2024-07-12 12:42:48.719120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.697 [2024-07-12 12:42:48.724013] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.697 [2024-07-12 12:42:48.724080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.697 [2024-07-12 12:42:48.724102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.697 
[2024-07-12 12:42:48.729022] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.697 [2024-07-12 12:42:48.729094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.697 [2024-07-12 12:42:48.729117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.697 [2024-07-12 12:42:48.734066] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.697 [2024-07-12 12:42:48.734140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.697 [2024-07-12 12:42:48.734162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.697 [2024-07-12 12:42:48.739077] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.697 [2024-07-12 12:42:48.739150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.697 [2024-07-12 12:42:48.739172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.697 [2024-07-12 12:42:48.744066] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.697 [2024-07-12 12:42:48.744133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.697 [2024-07-12 12:42:48.744155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.697 [2024-07-12 12:42:48.749056] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.697 [2024-07-12 12:42:48.749125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.697 [2024-07-12 12:42:48.749148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.697 [2024-07-12 12:42:48.754087] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.697 [2024-07-12 12:42:48.754155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.697 [2024-07-12 12:42:48.754177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.697 [2024-07-12 12:42:48.759073] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.697 [2024-07-12 12:42:48.759144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.697 [2024-07-12 12:42:48.759166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:18:22.697 [2024-07-12 12:42:48.764041] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.697 [2024-07-12 12:42:48.764109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.697 [2024-07-12 12:42:48.764131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.957 [2024-07-12 12:42:48.769026] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.957 [2024-07-12 12:42:48.769094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.957 [2024-07-12 12:42:48.769116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.957 [2024-07-12 12:42:48.774017] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.957 [2024-07-12 12:42:48.774086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.957 [2024-07-12 12:42:48.774109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.957 [2024-07-12 12:42:48.779009] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.957 [2024-07-12 12:42:48.779076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.957 [2024-07-12 12:42:48.779098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.957 [2024-07-12 12:42:48.783943] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.957 [2024-07-12 12:42:48.784010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.957 [2024-07-12 12:42:48.784033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.957 [2024-07-12 12:42:48.788903] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.957 [2024-07-12 12:42:48.788975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.957 [2024-07-12 12:42:48.788997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.957 [2024-07-12 12:42:48.793824] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.957 [2024-07-12 12:42:48.793891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.957 [2024-07-12 12:42:48.793913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.957 [2024-07-12 12:42:48.798822] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.957 [2024-07-12 12:42:48.798894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.957 [2024-07-12 12:42:48.798916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.957 [2024-07-12 12:42:48.803822] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.957 [2024-07-12 12:42:48.803893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.957 [2024-07-12 12:42:48.803915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.957 [2024-07-12 12:42:48.808814] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.957 [2024-07-12 12:42:48.808881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.957 [2024-07-12 12:42:48.808903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.957 [2024-07-12 12:42:48.813795] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.957 [2024-07-12 12:42:48.813860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.957 [2024-07-12 12:42:48.813882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.957 [2024-07-12 12:42:48.818796] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.957 [2024-07-12 12:42:48.818866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.957 [2024-07-12 12:42:48.818888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.957 [2024-07-12 12:42:48.823783] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.957 [2024-07-12 12:42:48.823855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.957 [2024-07-12 12:42:48.823878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.957 [2024-07-12 12:42:48.828815] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.957 [2024-07-12 12:42:48.828881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.957 [2024-07-12 12:42:48.828904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.957 [2024-07-12 12:42:48.833782] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.957 [2024-07-12 12:42:48.833848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.957 [2024-07-12 12:42:48.833871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.957 [2024-07-12 12:42:48.838777] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.957 [2024-07-12 12:42:48.838844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.957 [2024-07-12 12:42:48.838866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.957 [2024-07-12 12:42:48.843805] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.957 [2024-07-12 12:42:48.843874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.958 [2024-07-12 12:42:48.843896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.958 [2024-07-12 12:42:48.848822] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.958 [2024-07-12 12:42:48.848895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.958 [2024-07-12 12:42:48.848917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.958 [2024-07-12 12:42:48.853746] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.958 [2024-07-12 12:42:48.853812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.958 [2024-07-12 12:42:48.853834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.958 [2024-07-12 12:42:48.858736] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.958 [2024-07-12 12:42:48.858804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.958 [2024-07-12 12:42:48.858826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.958 [2024-07-12 12:42:48.863742] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.958 [2024-07-12 12:42:48.863810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.958 [2024-07-12 12:42:48.863832] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.958 [2024-07-12 12:42:48.868776] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.958 [2024-07-12 12:42:48.868848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.958 [2024-07-12 12:42:48.868871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.958 [2024-07-12 12:42:48.873718] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.958 [2024-07-12 12:42:48.873785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.958 [2024-07-12 12:42:48.873807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.958 [2024-07-12 12:42:48.878692] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.958 [2024-07-12 12:42:48.878758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.958 [2024-07-12 12:42:48.878780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.958 [2024-07-12 12:42:48.883674] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.958 [2024-07-12 12:42:48.883746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.958 [2024-07-12 12:42:48.883768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.958 [2024-07-12 12:42:48.888635] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.958 [2024-07-12 12:42:48.888701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.958 [2024-07-12 12:42:48.888722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.958 [2024-07-12 12:42:48.893558] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.958 [2024-07-12 12:42:48.893626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.958 [2024-07-12 12:42:48.893648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.958 [2024-07-12 12:42:48.898491] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.958 [2024-07-12 12:42:48.898555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.958 [2024-07-12 
12:42:48.898577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.958 [2024-07-12 12:42:48.903433] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.958 [2024-07-12 12:42:48.903507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.958 [2024-07-12 12:42:48.903529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.958 [2024-07-12 12:42:48.908415] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.958 [2024-07-12 12:42:48.908481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.958 [2024-07-12 12:42:48.908504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.958 [2024-07-12 12:42:48.913367] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.958 [2024-07-12 12:42:48.913455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.958 [2024-07-12 12:42:48.913477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.958 [2024-07-12 12:42:48.918393] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.958 [2024-07-12 12:42:48.918474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.958 [2024-07-12 12:42:48.918495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.958 [2024-07-12 12:42:48.923363] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.958 [2024-07-12 12:42:48.923457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.958 [2024-07-12 12:42:48.923479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.958 [2024-07-12 12:42:48.928324] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.958 [2024-07-12 12:42:48.928391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.958 [2024-07-12 12:42:48.928427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.958 [2024-07-12 12:42:48.933283] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.958 [2024-07-12 12:42:48.933350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:18:22.958 [2024-07-12 12:42:48.933373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.958 [2024-07-12 12:42:48.938307] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.958 [2024-07-12 12:42:48.938379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.958 [2024-07-12 12:42:48.938414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.958 [2024-07-12 12:42:48.943238] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.958 [2024-07-12 12:42:48.943306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.958 [2024-07-12 12:42:48.943328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.958 [2024-07-12 12:42:48.948270] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.958 [2024-07-12 12:42:48.948337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.958 [2024-07-12 12:42:48.948359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.958 [2024-07-12 12:42:48.953487] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.958 [2024-07-12 12:42:48.953675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.958 [2024-07-12 12:42:48.953697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.958 [2024-07-12 12:42:48.958611] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.958 [2024-07-12 12:42:48.958684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.958 [2024-07-12 12:42:48.958706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.958 [2024-07-12 12:42:48.963530] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.958 [2024-07-12 12:42:48.963599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.958 [2024-07-12 12:42:48.963621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.958 [2024-07-12 12:42:48.968504] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.958 [2024-07-12 12:42:48.968573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.958 [2024-07-12 12:42:48.968595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.958 [2024-07-12 12:42:48.973448] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.958 [2024-07-12 12:42:48.973514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.958 [2024-07-12 12:42:48.973536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.958 [2024-07-12 12:42:48.978446] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.958 [2024-07-12 12:42:48.978513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.958 [2024-07-12 12:42:48.978535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.958 [2024-07-12 12:42:48.983395] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.958 [2024-07-12 12:42:48.983484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.958 [2024-07-12 12:42:48.983506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.958 [2024-07-12 12:42:48.988364] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.959 [2024-07-12 12:42:48.988449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.959 [2024-07-12 12:42:48.988471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.959 [2024-07-12 12:42:48.993312] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.959 [2024-07-12 12:42:48.993383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.959 [2024-07-12 12:42:48.993417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.959 [2024-07-12 12:42:48.998285] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.959 [2024-07-12 12:42:48.998354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.959 [2024-07-12 12:42:48.998377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.959 [2024-07-12 12:42:49.003261] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.959 [2024-07-12 12:42:49.003332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.959 [2024-07-12 12:42:49.003354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.959 [2024-07-12 12:42:49.008217] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.959 [2024-07-12 12:42:49.008287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.959 [2024-07-12 12:42:49.008310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:22.959 [2024-07-12 12:42:49.013232] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.959 [2024-07-12 12:42:49.013300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.959 [2024-07-12 12:42:49.013322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:22.959 [2024-07-12 12:42:49.018238] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.959 [2024-07-12 12:42:49.018305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.959 [2024-07-12 12:42:49.018327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:22.959 [2024-07-12 12:42:49.023257] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.959 [2024-07-12 12:42:49.023324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.959 [2024-07-12 12:42:49.023347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:22.959 [2024-07-12 12:42:49.028238] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:22.959 [2024-07-12 12:42:49.028304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:22.959 [2024-07-12 12:42:49.028326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:23.217 [2024-07-12 12:42:49.033221] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:23.217 [2024-07-12 12:42:49.033289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.217 [2024-07-12 12:42:49.033311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:23.217 [2024-07-12 12:42:49.038237] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ba5500) with pdu=0x2000190fef90 00:18:23.217 [2024-07-12 12:42:49.038303] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:23.217 [2024-07-12 12:42:49.038327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:23.217 00:18:23.217 Latency(us) 00:18:23.217 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:23.217 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:23.217 nvme0n1 : 2.00 6135.09 766.89 0.00 0.00 2602.02 1921.40 10783.65 00:18:23.217 =================================================================================================================== 00:18:23.217 Total : 6135.09 766.89 0.00 0.00 2602.02 1921.40 10783.65 00:18:23.217 0 00:18:23.217 12:42:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:23.217 12:42:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:23.217 12:42:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:23.217 12:42:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:23.217 | .driver_specific 00:18:23.217 | .nvme_error 00:18:23.217 | .status_code 00:18:23.217 | .command_transient_transport_error' 00:18:23.475 12:42:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 396 > 0 )) 00:18:23.475 12:42:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80944 00:18:23.475 12:42:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80944 ']' 00:18:23.475 12:42:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 80944 00:18:23.475 12:42:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:18:23.475 12:42:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:23.475 12:42:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80944 00:18:23.475 killing process with pid 80944 00:18:23.475 Received shutdown signal, test time was about 2.000000 seconds 00:18:23.475 00:18:23.475 Latency(us) 00:18:23.475 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:23.475 =================================================================================================================== 00:18:23.475 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:23.475 12:42:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:23.475 12:42:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:23.475 12:42:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80944' 00:18:23.475 12:42:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80944 00:18:23.475 12:42:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80944 00:18:23.732 12:42:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 80736 00:18:23.732 12:42:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 80736 ']' 00:18:23.732 12:42:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@952 -- # kill -0 80736 00:18:23.732 12:42:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:18:23.732 12:42:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:23.732 12:42:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 80736 00:18:23.732 killing process with pid 80736 00:18:23.732 12:42:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:23.732 12:42:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:23.732 12:42:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 80736' 00:18:23.732 12:42:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 80736 00:18:23.732 12:42:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 80736 00:18:23.990 ************************************ 00:18:23.990 END TEST nvmf_digest_error 00:18:23.990 ************************************ 00:18:23.990 00:18:23.990 real 0m18.656s 00:18:23.990 user 0m36.183s 00:18:23.990 sys 0m4.760s 00:18:23.990 12:42:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:23.990 12:42:49 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:18:23.990 12:42:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:18:23.990 12:42:49 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:23.990 12:42:49 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:18:23.990 12:42:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:23.990 12:42:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:18:23.990 12:42:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:23.990 12:42:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:18:23.990 12:42:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:23.990 12:42:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:23.990 rmmod nvme_tcp 00:18:23.990 rmmod nvme_fabrics 00:18:23.990 rmmod nvme_keyring 00:18:23.990 12:42:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:23.990 12:42:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:18:23.990 12:42:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:18:23.990 12:42:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 80736 ']' 00:18:23.990 12:42:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 80736 00:18:23.990 12:42:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 80736 ']' 00:18:23.990 12:42:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 80736 00:18:23.990 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (80736) - No such process 00:18:23.990 Process with pid 80736 is not found 00:18:23.990 12:42:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 80736 is not found' 00:18:23.990 12:42:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:23.990 12:42:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:23.990 12:42:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 
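Note: the pass/fail signal for the digest-error run above comes from the bdev iostat counters rather than from the error spam itself: host/digest.sh pulls bdev_get_iostat for nvme0n1 over the bperf RPC socket and requires the transient-transport-error count to be non-zero (396 in this run). A minimal standalone version of that check, assuming the bdevperf instance is still serving RPCs on /var/tmp/bperf.sock and using an illustrative variable name:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Ask bdevperf for per-bdev NVMe error counters and pick out the transient
# transport errors, which is where the rejected data digests above are accounted.
errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errcount > 0 )) || exit 1   # the digest-error workload must have tripped at least one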
00:18:23.990 12:42:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:23.990 12:42:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:23.990 12:42:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:23.991 12:42:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:23.991 12:42:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:23.991 12:42:50 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:23.991 ************************************ 00:18:23.991 END TEST nvmf_digest 00:18:23.991 ************************************ 00:18:23.991 00:18:23.991 real 0m38.350s 00:18:23.991 user 1m13.217s 00:18:23.991 sys 0m9.770s 00:18:23.991 12:42:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:23.991 12:42:50 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:18:24.248 12:42:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:24.248 12:42:50 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:18:24.248 12:42:50 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 1 -eq 1 ]] 00:18:24.248 12:42:50 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:24.248 12:42:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:24.248 12:42:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:24.248 12:42:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:24.248 ************************************ 00:18:24.248 START TEST nvmf_host_multipath 00:18:24.248 ************************************ 00:18:24.248 12:42:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:24.248 * Looking for test storage... 
00:18:24.248 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:24.248 12:42:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:24.248 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:24.248 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:24.248 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:24.248 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:24.248 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:24.248 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:24.248 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:24.248 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=16360ad5-8c23-4d49-afe0-9a35c426fec5 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:18:24.249 12:42:50 
nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:24.249 Cannot find device "nvmf_tgt_br" 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:24.249 Cannot find device "nvmf_tgt_br2" 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br 
down 00:18:24.249 Cannot find device "nvmf_tgt_br" 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:24.249 Cannot find device "nvmf_tgt_br2" 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:24.249 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:24.249 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:24.249 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:24.508 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:24.508 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:24.508 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:24.508 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:24.508 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:24.508 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:24.508 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:24.508 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:24.508 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:24.508 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:24.508 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:24.508 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:24.508 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:24.508 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:24.508 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:24.508 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 
00:18:24.508 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:24.508 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:24.508 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:24.508 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:24.508 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:24.508 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:24.508 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:18:24.508 00:18:24.508 --- 10.0.0.2 ping statistics --- 00:18:24.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.508 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:18:24.508 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:24.508 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:24.508 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:18:24.508 00:18:24.508 --- 10.0.0.3 ping statistics --- 00:18:24.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.508 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:18:24.508 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:24.508 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:24.508 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:18:24.508 00:18:24.508 --- 10.0.0.1 ping statistics --- 00:18:24.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.508 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:18:24.508 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:24.508 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:18:24.508 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:24.508 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:24.508 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:24.508 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:24.508 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:24.508 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:24.508 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:24.508 12:42:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:18:24.508 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:24.508 12:42:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:24.508 12:42:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:24.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
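For orientation, the nvmf_veth_init sequence above (one namespace, three veth pairs, one bridge) reduces to the following hand-condensed sketch; every command is lifted from the trace, only the grouping, the loop, and the comments are added, and the individual "ip link set ... up" calls are elided:

ip netns add nvmf_tgt_ns_spdk                                   # target lives in its own namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target path 1
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2       # target path 2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
for br in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$br" master nvmf_br; done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                        # initiator -> target reachability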
00:18:24.508 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=81208 00:18:24.508 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:24.508 12:42:50 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 81208 00:18:24.508 12:42:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 81208 ']' 00:18:24.508 12:42:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.508 12:42:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:24.508 12:42:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.508 12:42:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:24.508 12:42:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:24.766 [2024-07-12 12:42:50.586913] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:18:24.766 [2024-07-12 12:42:50.587861] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:24.766 [2024-07-12 12:42:50.723285] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:25.076 [2024-07-12 12:42:50.849538] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:25.076 [2024-07-12 12:42:50.849864] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:25.076 [2024-07-12 12:42:50.850031] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:25.076 [2024-07-12 12:42:50.850087] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:25.076 [2024-07-12 12:42:50.850190] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
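The nvmfappstart step above boots the target inside that namespace with a two-core mask (-m 0x3). A rough standalone equivalent, with the polling loop standing in for the script's waitforlisten helper (the loop is an illustrative assumption, not the helper's literal code):

ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!
# Wait for the app's RPC server: rpc_get_methods is a cheap query that only
# succeeds once the reactor threads are up and the socket is listening.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done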
00:18:25.076 [2024-07-12 12:42:50.850361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.076 [2024-07-12 12:42:50.850369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.076 [2024-07-12 12:42:50.914727] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:18:25.644 12:42:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:25.644 12:42:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:18:25.644 12:42:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:25.644 12:42:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:25.644 12:42:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:25.644 12:42:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:25.644 12:42:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=81208 00:18:25.644 12:42:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:25.901 [2024-07-12 12:42:51.920553] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:25.901 12:42:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:26.465 Malloc0 00:18:26.465 12:42:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:26.465 12:42:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:26.722 12:42:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:26.980 [2024-07-12 12:42:52.949053] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:26.980 12:42:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:27.237 [2024-07-12 12:42:53.237316] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:27.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
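Condensed, the target-side RPCs above give the initiator one malloc-backed namespace reachable through two listeners on the same subsystem; those two ports are the two paths whose ANA states the test flips in the confirm_io_on_port rounds that follow. Commands are copied from the trace; only the shell variable and the comments are added:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py                 # default socket /var/tmp/spdk.sock
$rpc nvmf_create_transport -t tcp -o -u 8192                    # transport options exactly as captured above
$rpc bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB RAM bdev, 512-byte blocks
# -r enables ANA reporting on the subsystem so per-listener states can be toggled later.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # publish Malloc0 as nsid 1
# Same subsystem, same address, two ports: these become the two multipath targets.
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421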
00:18:27.237 12:42:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=81259 00:18:27.237 12:42:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:27.237 12:42:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:27.237 12:42:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 81259 /var/tmp/bdevperf.sock 00:18:27.237 12:42:53 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@829 -- # '[' -z 81259 ']' 00:18:27.237 12:42:53 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:27.237 12:42:53 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:27.237 12:42:53 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:27.237 12:42:53 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:27.237 12:42:53 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:28.609 12:42:54 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:28.609 12:42:54 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@862 -- # return 0 00:18:28.609 12:42:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:28.609 12:42:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:18:28.867 Nvme0n1 00:18:28.867 12:42:54 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:29.124 Nvme0n1 00:18:29.124 12:42:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:18:29.124 12:42:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:30.497 12:42:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:18:30.497 12:42:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:30.497 12:42:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:30.754 12:42:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:18:30.754 12:42:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81208 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:30.754 12:42:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81310 00:18:30.754 12:42:56 nvmf_tcp.nvmf_host_multipath -- 
host/multipath.sh@66 -- # sleep 6 00:18:37.323 12:43:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:37.323 12:43:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:37.323 12:43:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:37.323 12:43:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:37.323 Attaching 4 probes... 00:18:37.323 @path[10.0.0.2, 4421]: 17185 00:18:37.323 @path[10.0.0.2, 4421]: 17802 00:18:37.323 @path[10.0.0.2, 4421]: 17682 00:18:37.323 @path[10.0.0.2, 4421]: 17774 00:18:37.323 @path[10.0.0.2, 4421]: 17774 00:18:37.323 12:43:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:37.324 12:43:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:37.324 12:43:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:37.324 12:43:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:37.324 12:43:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:37.324 12:43:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:37.324 12:43:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81310 00:18:37.324 12:43:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:37.324 12:43:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:18:37.324 12:43:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:37.324 12:43:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:37.581 12:43:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:18:37.581 12:43:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81208 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:37.581 12:43:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81417 00:18:37.581 12:43:03 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:44.177 12:43:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:44.177 12:43:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:44.177 12:43:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:44.177 12:43:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:44.177 Attaching 4 probes... 
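On the host side, the bdevperf commands shown above boil down to one application start plus two attach calls against its private RPC socket. A condensed sketch of that flow, with paths abbreviated and the same options the test passed:

# Start bdevperf idle (-z) on its own RPC socket; -w verify runs a verify workload for 90 seconds.
build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
# Apply the nvme bdev options used above, then register both target ports under one controller name.
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
# The second attach adds port 4421 as an extra path to the same Nvme0 controller via -x multipath.
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10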
00:18:44.177 @path[10.0.0.2, 4420]: 17518 00:18:44.177 @path[10.0.0.2, 4420]: 17535 00:18:44.177 @path[10.0.0.2, 4420]: 17756 00:18:44.177 @path[10.0.0.2, 4420]: 17771 00:18:44.177 @path[10.0.0.2, 4420]: 18100 00:18:44.177 12:43:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:44.177 12:43:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:44.177 12:43:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:44.177 12:43:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:44.177 12:43:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:44.177 12:43:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:44.177 12:43:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81417 00:18:44.177 12:43:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:44.177 12:43:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:18:44.177 12:43:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:44.177 12:43:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:44.434 12:43:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:18:44.434 12:43:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81535 00:18:44.434 12:43:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:44.434 12:43:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81208 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:51.038 12:43:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:51.038 12:43:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:51.038 12:43:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:51.038 12:43:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:51.038 Attaching 4 probes... 
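The port check repeated throughout this run pairs the target's view with the bpftrace counters: jq picks the listener whose first ANA state matches the expected one, and the @path counters in trace.txt show which port actually carried I/O. A rough equivalent, assuming trace.txt stands in for the full trace path used above and has already been written by scripts/bpftrace.sh:

# Which port does the target report in the expected ANA state (here "optimized")?
active_port=$(scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
  | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid')
# Which port did the probes count I/O on? Take the first @path line.
port=$(cut -d ']' -f1 < trace.txt | awk '$1=="@path[10.0.0.2," {print $2}' | sed -n 1p)
# The check passes when the observed port matches the port the target reports.
[[ "$port" == "$active_port" ]] && echo "I/O confirmed on port $port"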
00:18:51.038 @path[10.0.0.2, 4421]: 15218 00:18:51.038 @path[10.0.0.2, 4421]: 17637 00:18:51.038 @path[10.0.0.2, 4421]: 17632 00:18:51.038 @path[10.0.0.2, 4421]: 17664 00:18:51.038 @path[10.0.0.2, 4421]: 17720 00:18:51.038 12:43:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:51.038 12:43:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:51.038 12:43:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:51.038 12:43:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:51.038 12:43:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:51.038 12:43:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:51.038 12:43:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81535 00:18:51.038 12:43:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:51.038 12:43:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:18:51.038 12:43:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:51.038 12:43:16 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:51.297 12:43:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:18:51.297 12:43:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81208 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:51.297 12:43:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81652 00:18:51.297 12:43:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:57.854 12:43:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:57.854 12:43:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:18:57.854 12:43:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:18:57.854 12:43:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:57.854 Attaching 4 probes... 
00:18:57.854 00:18:57.854 00:18:57.854 00:18:57.854 00:18:57.854 00:18:57.854 12:43:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:57.855 12:43:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:57.855 12:43:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:57.855 12:43:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:18:57.855 12:43:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:18:57.855 12:43:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:18:57.855 12:43:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81652 00:18:57.855 12:43:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:57.855 12:43:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:18:57.855 12:43:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:57.855 12:43:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:58.112 12:43:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:18:58.112 12:43:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81760 00:18:58.112 12:43:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81208 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:58.112 12:43:23 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:04.720 12:43:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:04.720 12:43:29 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:04.720 12:43:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:04.720 12:43:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:04.720 Attaching 4 probes... 
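Each set_ANA_state step above is simply two listener updates on the target, one per port. A minimal sketch of the variant applied at this point (4420 non_optimized, 4421 optimized):

scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
# With both listeners set to inaccessible instead, the probe output above stays empty: no path carries I/O.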
00:19:04.720 @path[10.0.0.2, 4421]: 17244 00:19:04.720 @path[10.0.0.2, 4421]: 17520 00:19:04.720 @path[10.0.0.2, 4421]: 17480 00:19:04.720 @path[10.0.0.2, 4421]: 17422 00:19:04.720 @path[10.0.0.2, 4421]: 17320 00:19:04.720 12:43:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:04.720 12:43:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:04.720 12:43:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:04.720 12:43:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:04.720 12:43:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:04.720 12:43:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:04.720 12:43:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81760 00:19:04.720 12:43:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:04.720 12:43:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:04.720 [2024-07-12 12:43:30.465974] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bf70 is same with the state(5) to be set 00:19:04.720 [2024-07-12 12:43:30.466072] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bf70 is same with the state(5) to be set 00:19:04.720 [2024-07-12 12:43:30.466093] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8bf70 is same with the state(5) to be set 00:19:04.720 12:43:30 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:19:05.654 12:43:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:19:05.654 12:43:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81884 00:19:05.654 12:43:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81208 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:05.654 12:43:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:12.209 12:43:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:19:12.209 12:43:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:12.209 12:43:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:19:12.209 12:43:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:12.209 Attaching 4 probes... 
00:19:12.209 @path[10.0.0.2, 4420]: 16907 00:19:12.209 @path[10.0.0.2, 4420]: 17181 00:19:12.209 @path[10.0.0.2, 4420]: 17203 00:19:12.209 @path[10.0.0.2, 4420]: 17052 00:19:12.209 @path[10.0.0.2, 4420]: 17104 00:19:12.209 12:43:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:12.209 12:43:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:12.209 12:43:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:12.209 12:43:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:19:12.209 12:43:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:19:12.209 12:43:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:19:12.209 12:43:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81884 00:19:12.209 12:43:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:12.209 12:43:37 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:12.209 [2024-07-12 12:43:38.019598] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:12.209 12:43:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:12.466 12:43:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:19:19.020 12:43:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:19:19.020 12:43:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=82058 00:19:19.020 12:43:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 81208 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:19.020 12:43:44 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:19:24.282 12:43:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:24.282 12:43:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:24.540 12:43:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:19:24.540 12:43:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:24.540 Attaching 4 probes... 
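The last two cycles exercise path loss and recovery rather than ANA flips alone: the 4421 listener is dropped outright, I/O is confirmed to fail over to 4420, and the listener is then restored and marked optimized so I/O moves back. A condensed sketch matching the RPCs above:

# Drop the second path entirely; bdevperf keeps running against 4420.
scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
sleep 1
# ...confirm I/O on 4420 (same jq/awk check as above), then bring 4421 back as the optimized path.
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized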
00:19:24.540 @path[10.0.0.2, 4421]: 17115 00:19:24.540 @path[10.0.0.2, 4421]: 17456 00:19:24.540 @path[10.0.0.2, 4421]: 17449 00:19:24.540 @path[10.0.0.2, 4421]: 17557 00:19:24.540 @path[10.0.0.2, 4421]: 17363 00:19:24.540 12:43:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:24.540 12:43:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:24.540 12:43:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:19:24.540 12:43:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:19:24.540 12:43:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:24.540 12:43:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:24.540 12:43:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 82058 00:19:24.540 12:43:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:24.540 12:43:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 81259 00:19:24.540 12:43:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 81259 ']' 00:19:24.540 12:43:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 81259 00:19:24.540 12:43:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:19:24.540 12:43:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:24.540 12:43:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81259 00:19:24.799 killing process with pid 81259 00:19:24.799 12:43:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:24.799 12:43:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:24.799 12:43:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81259' 00:19:24.799 12:43:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 81259 00:19:24.799 12:43:50 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 81259 00:19:24.799 Connection closed with partial response: 00:19:24.799 00:19:24.799 00:19:24.799 12:43:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 81259 00:19:24.799 12:43:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:25.065 [2024-07-12 12:42:53.321284] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:19:25.065 [2024-07-12 12:42:53.321449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81259 ] 00:19:25.065 [2024-07-12 12:42:53.463318] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.065 [2024-07-12 12:42:53.592786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:25.065 [2024-07-12 12:42:53.645994] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:25.065 Running I/O for 90 seconds... 
00:19:25.065 [2024-07-12 12:43:03.483113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:41352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.065 [2024-07-12 12:43:03.483201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:25.065 [2024-07-12 12:43:03.483265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:41360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.065 [2024-07-12 12:43:03.483287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:25.065 [2024-07-12 12:43:03.483311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:41368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.065 [2024-07-12 12:43:03.483326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:25.065 [2024-07-12 12:43:03.483349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:41376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.065 [2024-07-12 12:43:03.483364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:25.066 [2024-07-12 12:43:03.483385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:41384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.066 [2024-07-12 12:43:03.483413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:25.066 [2024-07-12 12:43:03.483439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:41392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.066 [2024-07-12 12:43:03.483454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:25.066 [2024-07-12 12:43:03.483487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:41400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.066 [2024-07-12 12:43:03.483511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:25.066 [2024-07-12 12:43:03.483532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:41408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.066 [2024-07-12 12:43:03.483547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:25.066 [2024-07-12 12:43:03.483568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:40712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.066 [2024-07-12 12:43:03.483583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:25.066 [2024-07-12 12:43:03.483604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:40720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.066 [2024-07-12 12:43:03.483619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:82 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:25.066 [2024-07-12 12:43:03.483640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:40728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.066 [2024-07-12 12:43:03.483693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:25.066 [2024-07-12 12:43:03.483717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:40736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.066 [2024-07-12 12:43:03.483732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:25.066 [2024-07-12 12:43:03.483754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:40744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.066 [2024-07-12 12:43:03.483769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:25.066 [2024-07-12 12:43:03.483791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:40752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.066 [2024-07-12 12:43:03.483806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:25.066 [2024-07-12 12:43:03.483827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:40760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.066 [2024-07-12 12:43:03.483842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:25.066 [2024-07-12 12:43:03.483863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:40768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.066 [2024-07-12 12:43:03.483879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:25.066 [2024-07-12 12:43:03.483900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:40776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.066 [2024-07-12 12:43:03.483914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:25.066 [2024-07-12 12:43:03.483935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:40784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.066 [2024-07-12 12:43:03.483950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:25.066 [2024-07-12 12:43:03.483971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.066 [2024-07-12 12:43:03.483986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:25.066 [2024-07-12 12:43:03.484007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:40800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.066 [2024-07-12 12:43:03.484022] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:25.066 [2024-07-12 12:43:03.484043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:40808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.066 [2024-07-12 12:43:03.484057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.066 [2024-07-12 12:43:03.484078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.066 [2024-07-12 12:43:03.484093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:25.066 [2024-07-12 12:43:03.484114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:40824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.066 [2024-07-12 12:43:03.484138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:25.066 [2024-07-12 12:43:03.484160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:40832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.066 [2024-07-12 12:43:03.484176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:25.066 [2024-07-12 12:43:03.484218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:41416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.066 [2024-07-12 12:43:03.484238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:25.066 [2024-07-12 12:43:03.484260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:41424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.066 [2024-07-12 12:43:03.484275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:25.066 [2024-07-12 12:43:03.484296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.066 [2024-07-12 12:43:03.484311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:25.066 [2024-07-12 12:43:03.484331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:41440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.066 [2024-07-12 12:43:03.484346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:25.066 [2024-07-12 12:43:03.484367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:41448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.066 [2024-07-12 12:43:03.484382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:25.066 [2024-07-12 12:43:03.484436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:41456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:25.066 [2024-07-12 12:43:03.484456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:25.066 [2024-07-12 12:43:03.484478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:41464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.066 [2024-07-12 12:43:03.484493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:25.066 [2024-07-12 12:43:03.484515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:41472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.066 [2024-07-12 12:43:03.484531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:25.066 [2024-07-12 12:43:03.484553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:41480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.066 [2024-07-12 12:43:03.484567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:25.066 [2024-07-12 12:43:03.484589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:41488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.066 [2024-07-12 12:43:03.484604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:25.066 [2024-07-12 12:43:03.484626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:41496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.066 [2024-07-12 12:43:03.484640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:25.066 [2024-07-12 12:43:03.484672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:41504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.066 [2024-07-12 12:43:03.484690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:25.066 [2024-07-12 12:43:03.484712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:41512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.066 [2024-07-12 12:43:03.484727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:25.066 [2024-07-12 12:43:03.484749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:41520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.066 [2024-07-12 12:43:03.484763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:25.066 [2024-07-12 12:43:03.484785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.066 [2024-07-12 12:43:03.484800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:25.066 [2024-07-12 12:43:03.484821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 
lba:40840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.066 [2024-07-12 12:43:03.484836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:25.066 [2024-07-12 12:43:03.484857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:40848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.066 [2024-07-12 12:43:03.484871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:25.066 [2024-07-12 12:43:03.484892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:40856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.066 [2024-07-12 12:43:03.484907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:25.066 [2024-07-12 12:43:03.484929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:40864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.066 [2024-07-12 12:43:03.484943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:25.066 [2024-07-12 12:43:03.484964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:40872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.067 [2024-07-12 12:43:03.484979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:25.067 [2024-07-12 12:43:03.485000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:40880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.067 [2024-07-12 12:43:03.485014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:25.067 [2024-07-12 12:43:03.485036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:40888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.067 [2024-07-12 12:43:03.485051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:25.067 [2024-07-12 12:43:03.485073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:40896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.067 [2024-07-12 12:43:03.485087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:25.067 [2024-07-12 12:43:03.485117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:40904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.067 [2024-07-12 12:43:03.485134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:25.067 [2024-07-12 12:43:03.485155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:40912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.067 [2024-07-12 12:43:03.485171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:25.067 [2024-07-12 12:43:03.485193] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:40920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.067 [2024-07-12 12:43:03.485208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:25.067 [2024-07-12 12:43:03.485229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:40928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.067 [2024-07-12 12:43:03.485243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.067 [2024-07-12 12:43:03.485265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:40936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.067 [2024-07-12 12:43:03.485279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.067 [2024-07-12 12:43:03.485301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:40944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.067 [2024-07-12 12:43:03.485316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.067 [2024-07-12 12:43:03.485338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:40952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.067 [2024-07-12 12:43:03.485352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:25.067 [2024-07-12 12:43:03.485373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:40960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.067 [2024-07-12 12:43:03.485388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:25.067 [2024-07-12 12:43:03.485421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:41536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.067 [2024-07-12 12:43:03.485438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:25.067 [2024-07-12 12:43:03.485464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.067 [2024-07-12 12:43:03.485480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:25.067 [2024-07-12 12:43:03.485502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:41552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.067 [2024-07-12 12:43:03.485516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:25.067 [2024-07-12 12:43:03.485537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:41560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.067 [2024-07-12 12:43:03.485552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 
00:19:25.067 [2024-07-12 12:43:03.485583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:41568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.067 [2024-07-12 12:43:03.485600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:25.067 [2024-07-12 12:43:03.485622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.067 [2024-07-12 12:43:03.485637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:25.067 [2024-07-12 12:43:03.485658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:41584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.067 [2024-07-12 12:43:03.485673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:25.067 [2024-07-12 12:43:03.485694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.067 [2024-07-12 12:43:03.485709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:25.067 [2024-07-12 12:43:03.485730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:41600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.067 [2024-07-12 12:43:03.485746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:25.067 [2024-07-12 12:43:03.485768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:40968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.067 [2024-07-12 12:43:03.485782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:25.067 [2024-07-12 12:43:03.485804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:40976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.067 [2024-07-12 12:43:03.485819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:25.067 [2024-07-12 12:43:03.485841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:40984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.067 [2024-07-12 12:43:03.485856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:25.067 [2024-07-12 12:43:03.485877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:40992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.067 [2024-07-12 12:43:03.485892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:25.067 [2024-07-12 12:43:03.485913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:41000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.067 [2024-07-12 12:43:03.485927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:25.067 [2024-07-12 12:43:03.485949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:41008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.067 [2024-07-12 12:43:03.485964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:25.067 [2024-07-12 12:43:03.485985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:41016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.067 [2024-07-12 12:43:03.486000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:25.067 [2024-07-12 12:43:03.486033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:41024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.067 [2024-07-12 12:43:03.486056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:25.067 [2024-07-12 12:43:03.486079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:41032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.067 [2024-07-12 12:43:03.486095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:25.067 [2024-07-12 12:43:03.486116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:41040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.067 [2024-07-12 12:43:03.486131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:25.067 [2024-07-12 12:43:03.486152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:41048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.067 [2024-07-12 12:43:03.486167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:25.067 [2024-07-12 12:43:03.486189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:41056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.067 [2024-07-12 12:43:03.486203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:25.067 [2024-07-12 12:43:03.486225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:41064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.067 [2024-07-12 12:43:03.486239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:25.067 [2024-07-12 12:43:03.486260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:41072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.067 [2024-07-12 12:43:03.486275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:25.067 [2024-07-12 12:43:03.486296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:41080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.067 [2024-07-12 12:43:03.486311] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:25.067 [2024-07-12 12:43:03.486332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:41088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.067 [2024-07-12 12:43:03.486348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:25.067 [2024-07-12 12:43:03.486369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:41096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.067 [2024-07-12 12:43:03.486384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:25.067 [2024-07-12 12:43:03.486417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:41104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.067 [2024-07-12 12:43:03.486434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:25.067 [2024-07-12 12:43:03.486456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:41112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.067 [2024-07-12 12:43:03.486472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:25.067 [2024-07-12 12:43:03.486493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:41120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.067 [2024-07-12 12:43:03.486515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:25.067 [2024-07-12 12:43:03.486538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:41128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.067 [2024-07-12 12:43:03.486554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.068 [2024-07-12 12:43:03.486576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:41136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.068 [2024-07-12 12:43:03.486591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:25.068 [2024-07-12 12:43:03.486612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:41144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.068 [2024-07-12 12:43:03.486626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:25.068 [2024-07-12 12:43:03.486655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:41152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.068 [2024-07-12 12:43:03.486670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:25.068 [2024-07-12 12:43:03.486696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:41608 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:25.068 [2024-07-12 12:43:03.486712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:25.068 [2024-07-12 12:43:03.486733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:41616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.068 [2024-07-12 12:43:03.486748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:25.068 [2024-07-12 12:43:03.486769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:41624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.068 [2024-07-12 12:43:03.486784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:25.068 [2024-07-12 12:43:03.486805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:41632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.068 [2024-07-12 12:43:03.486819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:25.068 [2024-07-12 12:43:03.486841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:41640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.068 [2024-07-12 12:43:03.486855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:25.068 [2024-07-12 12:43:03.486876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:41648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.068 [2024-07-12 12:43:03.486891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:25.068 [2024-07-12 12:43:03.486911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:41656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.068 [2024-07-12 12:43:03.486926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:25.068 [2024-07-12 12:43:03.486947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:41664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.068 [2024-07-12 12:43:03.486973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:25.068 [2024-07-12 12:43:03.487003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:41160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.068 [2024-07-12 12:43:03.487019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:25.068 [2024-07-12 12:43:03.487041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:41168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.068 [2024-07-12 12:43:03.487055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:25.068 [2024-07-12 12:43:03.487076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 
nsid:1 lba:41176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.068 [2024-07-12 12:43:03.487091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:25.068 [2024-07-12 12:43:03.487112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:41184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.068 [2024-07-12 12:43:03.487126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:25.068 [2024-07-12 12:43:03.487148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:41192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.068 [2024-07-12 12:43:03.487162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:25.068 [2024-07-12 12:43:03.487184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:41200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.068 [2024-07-12 12:43:03.487198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:25.068 [2024-07-12 12:43:03.487219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:41208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.068 [2024-07-12 12:43:03.487234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:25.068 [2024-07-12 12:43:03.487262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:41216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.068 [2024-07-12 12:43:03.487277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:25.068 [2024-07-12 12:43:03.487299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:41224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.068 [2024-07-12 12:43:03.487313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:25.068 [2024-07-12 12:43:03.487335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.068 [2024-07-12 12:43:03.487349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:25.068 [2024-07-12 12:43:03.487370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:41240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.068 [2024-07-12 12:43:03.487385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:25.068 [2024-07-12 12:43:03.487417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:41248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.068 [2024-07-12 12:43:03.487435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:25.068 [2024-07-12 12:43:03.487475] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:41256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.068 [2024-07-12 12:43:03.487494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:25.068 [2024-07-12 12:43:03.487517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:41264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.068 [2024-07-12 12:43:03.487532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:25.068 [2024-07-12 12:43:03.487554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:41272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.068 [2024-07-12 12:43:03.487569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:25.068 [2024-07-12 12:43:03.487590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:41280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.068 [2024-07-12 12:43:03.487612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:25.068 [2024-07-12 12:43:03.487633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.068 [2024-07-12 12:43:03.487648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:25.068 [2024-07-12 12:43:03.487669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:41680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.068 [2024-07-12 12:43:03.487685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:25.068 [2024-07-12 12:43:03.487706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:41688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.068 [2024-07-12 12:43:03.487720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:25.068 [2024-07-12 12:43:03.487742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:41696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.068 [2024-07-12 12:43:03.487756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:25.068 [2024-07-12 12:43:03.487777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:41704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.068 [2024-07-12 12:43:03.487792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.068 [2024-07-12 12:43:03.487813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:41712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.068 [2024-07-12 12:43:03.487828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0043 p:0 m:0 
dnr:0 00:19:25.068 [2024-07-12 12:43:03.487848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:41720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.068 [2024-07-12 12:43:03.487863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:25.068 [2024-07-12 12:43:03.487900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:41728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.068 [2024-07-12 12:43:03.487915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:25.068 [2024-07-12 12:43:03.487937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.068 [2024-07-12 12:43:03.487959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:25.068 [2024-07-12 12:43:03.487982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:41296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.068 [2024-07-12 12:43:03.487998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:25.068 [2024-07-12 12:43:03.488020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:41304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.068 [2024-07-12 12:43:03.488035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:25.068 [2024-07-12 12:43:03.488056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:41312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.068 [2024-07-12 12:43:03.488071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:25.068 [2024-07-12 12:43:03.488094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:41320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.068 [2024-07-12 12:43:03.488109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:25.068 [2024-07-12 12:43:03.488131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:41328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.068 [2024-07-12 12:43:03.488145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:25.068 [2024-07-12 12:43:03.488167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:41336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.069 [2024-07-12 12:43:03.488182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:25.069 [2024-07-12 12:43:03.489380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:41344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.069 [2024-07-12 12:43:03.489432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:25.069 [2024-07-12 12:43:10.031854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:113744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.069 [2024-07-12 12:43:10.031937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:25.069 [2024-07-12 12:43:10.032012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:113752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.069 [2024-07-12 12:43:10.032033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:25.069 [2024-07-12 12:43:10.032055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:113760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.069 [2024-07-12 12:43:10.032070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:25.069 [2024-07-12 12:43:10.032090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:113768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.069 [2024-07-12 12:43:10.032104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:25.069 [2024-07-12 12:43:10.032124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:113776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.069 [2024-07-12 12:43:10.032165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:25.069 [2024-07-12 12:43:10.032188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:113784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.069 [2024-07-12 12:43:10.032203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:25.069 [2024-07-12 12:43:10.032223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:113792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.069 [2024-07-12 12:43:10.032237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:25.069 [2024-07-12 12:43:10.032257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:113800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.069 [2024-07-12 12:43:10.032272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:25.069 [2024-07-12 12:43:10.032311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:113808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.069 [2024-07-12 12:43:10.032330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:25.069 [2024-07-12 12:43:10.032351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:113816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.069 [2024-07-12 12:43:10.032366] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:25.069 [2024-07-12 12:43:10.032386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:113824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.069 [2024-07-12 12:43:10.032400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:25.069 [2024-07-12 12:43:10.032449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:113832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.069 [2024-07-12 12:43:10.032467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:25.069 [2024-07-12 12:43:10.032488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:113296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.069 [2024-07-12 12:43:10.032503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:25.069 [2024-07-12 12:43:10.032524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:113304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.069 [2024-07-12 12:43:10.032538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:25.069 [2024-07-12 12:43:10.032558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:113312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.069 [2024-07-12 12:43:10.032573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:25.069 [2024-07-12 12:43:10.032593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:113320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.069 [2024-07-12 12:43:10.032608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:25.069 [2024-07-12 12:43:10.032628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:113328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.069 [2024-07-12 12:43:10.032642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.069 [2024-07-12 12:43:10.032678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:113336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.069 [2024-07-12 12:43:10.032696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:25.069 [2024-07-12 12:43:10.032717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:113344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.069 [2024-07-12 12:43:10.032732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:25.069 [2024-07-12 12:43:10.032768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:113352 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:25.069 [2024-07-12 12:43:10.032782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:25.069 [2024-07-12 12:43:10.032803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:113840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.069 [2024-07-12 12:43:10.032817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:25.069 [2024-07-12 12:43:10.032837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:113848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.069 [2024-07-12 12:43:10.032851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:25.069 [2024-07-12 12:43:10.032871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:113856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.069 [2024-07-12 12:43:10.032886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:25.069 [2024-07-12 12:43:10.032906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:113864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.069 [2024-07-12 12:43:10.032920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:25.069 [2024-07-12 12:43:10.032944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:113872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.069 [2024-07-12 12:43:10.032960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:25.069 [2024-07-12 12:43:10.032981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:113880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.069 [2024-07-12 12:43:10.032995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:25.069 [2024-07-12 12:43:10.033015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:113888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.069 [2024-07-12 12:43:10.033029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:25.069 [2024-07-12 12:43:10.033049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.069 [2024-07-12 12:43:10.033063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:25.069 [2024-07-12 12:43:10.033083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:113904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.069 [2024-07-12 12:43:10.033098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:25.069 [2024-07-12 12:43:10.033126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:20 nsid:1 lba:113912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.069 [2024-07-12 12:43:10.033142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:25.069 [2024-07-12 12:43:10.033162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:113920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.069 [2024-07-12 12:43:10.033177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:25.069 [2024-07-12 12:43:10.033197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:113928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.069 [2024-07-12 12:43:10.033211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:25.069 [2024-07-12 12:43:10.033231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.069 [2024-07-12 12:43:10.033245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:25.069 [2024-07-12 12:43:10.033266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:113944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.069 [2024-07-12 12:43:10.033281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:25.069 [2024-07-12 12:43:10.033301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:113952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.069 [2024-07-12 12:43:10.033316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:25.069 [2024-07-12 12:43:10.033336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.069 [2024-07-12 12:43:10.033350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:25.069 [2024-07-12 12:43:10.033371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:113968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.069 [2024-07-12 12:43:10.033384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:25.069 [2024-07-12 12:43:10.033404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:113976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.069 [2024-07-12 12:43:10.033448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:25.069 [2024-07-12 12:43:10.033471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:113360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.070 [2024-07-12 12:43:10.033486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:25.070 [2024-07-12 
12:43:10.033507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:113368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.070 [2024-07-12 12:43:10.033522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:25.070 [2024-07-12 12:43:10.033543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:113376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.070 [2024-07-12 12:43:10.033558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:25.070 [2024-07-12 12:43:10.033587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:113384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.070 [2024-07-12 12:43:10.033604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:25.070 [2024-07-12 12:43:10.033625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:113392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.070 [2024-07-12 12:43:10.033639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:25.070 [2024-07-12 12:43:10.033660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:113400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.070 [2024-07-12 12:43:10.033675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:25.070 [2024-07-12 12:43:10.033696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:113408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.070 [2024-07-12 12:43:10.033711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:25.070 [2024-07-12 12:43:10.033731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:113416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.070 [2024-07-12 12:43:10.033761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:25.070 [2024-07-12 12:43:10.033781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:113984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.070 [2024-07-12 12:43:10.033795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:25.070 [2024-07-12 12:43:10.033816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:113992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.070 [2024-07-12 12:43:10.033830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:25.070 [2024-07-12 12:43:10.033865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:114000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.070 [2024-07-12 12:43:10.033884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:102 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.070 [2024-07-12 12:43:10.033906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:114008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.070 [2024-07-12 12:43:10.033921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:25.070 [2024-07-12 12:43:10.033941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:114016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.070 [2024-07-12 12:43:10.033956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:25.070 [2024-07-12 12:43:10.033976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:114024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.070 [2024-07-12 12:43:10.033990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:25.070 [2024-07-12 12:43:10.034011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.070 [2024-07-12 12:43:10.034025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:25.070 [2024-07-12 12:43:10.034045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.070 [2024-07-12 12:43:10.034067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:25.070 [2024-07-12 12:43:10.034089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:114048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.070 [2024-07-12 12:43:10.034104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:25.070 [2024-07-12 12:43:10.034125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:114056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.070 [2024-07-12 12:43:10.034139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:25.070 [2024-07-12 12:43:10.034162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:114064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.070 [2024-07-12 12:43:10.034178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:25.070 [2024-07-12 12:43:10.034199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:114072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.070 [2024-07-12 12:43:10.034213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:25.070 [2024-07-12 12:43:10.034233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:114080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.070 [2024-07-12 12:43:10.034247] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:25.070 [2024-07-12 12:43:10.034268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:114088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.070 [2024-07-12 12:43:10.034282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:25.070 [2024-07-12 12:43:10.034302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:114096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.070 [2024-07-12 12:43:10.034316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:25.070 [2024-07-12 12:43:10.034336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:114104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.070 [2024-07-12 12:43:10.034351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:25.070 [2024-07-12 12:43:10.034371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:114112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.070 [2024-07-12 12:43:10.034385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:25.070 [2024-07-12 12:43:10.034405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:114120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.070 [2024-07-12 12:43:10.034449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:25.070 [2024-07-12 12:43:10.034483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:113424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.070 [2024-07-12 12:43:10.034499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:25.070 [2024-07-12 12:43:10.034521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:113432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.070 [2024-07-12 12:43:10.034543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:25.070 [2024-07-12 12:43:10.034569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:113440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.070 [2024-07-12 12:43:10.034585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:25.070 [2024-07-12 12:43:10.034607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:113448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.070 [2024-07-12 12:43:10.034621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:25.070 [2024-07-12 12:43:10.034643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:113456 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:25.070 [2024-07-12 12:43:10.034657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:25.070 [2024-07-12 12:43:10.034678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:113464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.070 [2024-07-12 12:43:10.034692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:25.070 [2024-07-12 12:43:10.034713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:113472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.070 [2024-07-12 12:43:10.034728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:25.070 [2024-07-12 12:43:10.034749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:113480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.070 [2024-07-12 12:43:10.034763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:25.070 [2024-07-12 12:43:10.034800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:113488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.070 [2024-07-12 12:43:10.034814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:25.070 [2024-07-12 12:43:10.034834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:113496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.070 [2024-07-12 12:43:10.034848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:25.070 [2024-07-12 12:43:10.034868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:113504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.071 [2024-07-12 12:43:10.034882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:25.071 [2024-07-12 12:43:10.034903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:113512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.071 [2024-07-12 12:43:10.034918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:25.071 [2024-07-12 12:43:10.034938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:113520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.071 [2024-07-12 12:43:10.034953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:25.071 [2024-07-12 12:43:10.034973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:113528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.071 [2024-07-12 12:43:10.034987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:25.071 [2024-07-12 12:43:10.035116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:81 nsid:1 lba:113536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.071 [2024-07-12 12:43:10.035134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:25.071 [2024-07-12 12:43:10.035155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:113544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.071 [2024-07-12 12:43:10.035170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:25.071 [2024-07-12 12:43:10.035206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:114128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.071 [2024-07-12 12:43:10.035225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.071 [2024-07-12 12:43:10.035246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:114136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.071 [2024-07-12 12:43:10.035261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:25.071 [2024-07-12 12:43:10.035281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:114144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.071 [2024-07-12 12:43:10.035296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:25.071 [2024-07-12 12:43:10.035316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:114152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.071 [2024-07-12 12:43:10.035346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:25.071 [2024-07-12 12:43:10.035367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:114160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.071 [2024-07-12 12:43:10.035382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:25.071 [2024-07-12 12:43:10.035403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:114168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.071 [2024-07-12 12:43:10.035417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:25.071 [2024-07-12 12:43:10.035453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:114176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.071 [2024-07-12 12:43:10.035481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:25.071 [2024-07-12 12:43:10.035503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:114184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.071 [2024-07-12 12:43:10.035518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:25.071 [2024-07-12 
12:43:10.035539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:114192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.071 [2024-07-12 12:43:10.035553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:25.071 [2024-07-12 12:43:10.035574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:114200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.071 [2024-07-12 12:43:10.035589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:25.071 [2024-07-12 12:43:10.035621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:114208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.071 [2024-07-12 12:43:10.035639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:25.071 [2024-07-12 12:43:10.035671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:114216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.071 [2024-07-12 12:43:10.035685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:25.071 [2024-07-12 12:43:10.035706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:114224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.071 [2024-07-12 12:43:10.035721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:25.071 [2024-07-12 12:43:10.035742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:114232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.071 [2024-07-12 12:43:10.035756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:25.071 [2024-07-12 12:43:10.035777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:114240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.071 [2024-07-12 12:43:10.035791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:25.071 [2024-07-12 12:43:10.035812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:113552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.071 [2024-07-12 12:43:10.035835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:25.071 [2024-07-12 12:43:10.035857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:113560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.071 [2024-07-12 12:43:10.035886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:25.071 [2024-07-12 12:43:10.035907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:113568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.071 [2024-07-12 12:43:10.035921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:120 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:25.071 [2024-07-12 12:43:10.035941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:113576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.071 [2024-07-12 12:43:10.035955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:25.071 [2024-07-12 12:43:10.035975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:113584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.071 [2024-07-12 12:43:10.035989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:25.071 [2024-07-12 12:43:10.036009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:113592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.071 [2024-07-12 12:43:10.036023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:25.071 [2024-07-12 12:43:10.036043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:113600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.071 [2024-07-12 12:43:10.036057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:25.071 [2024-07-12 12:43:10.036084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:113608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.071 [2024-07-12 12:43:10.036100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:25.071 [2024-07-12 12:43:10.036121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:113616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.071 [2024-07-12 12:43:10.036135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:25.071 [2024-07-12 12:43:10.036155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:113624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.071 [2024-07-12 12:43:10.036170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:25.071 [2024-07-12 12:43:10.036190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:113632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.071 [2024-07-12 12:43:10.036204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:25.071 [2024-07-12 12:43:10.036224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:113640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.071 [2024-07-12 12:43:10.036239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:25.071 [2024-07-12 12:43:10.036259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:113648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.071 [2024-07-12 12:43:10.036273] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:25.071 [2024-07-12 12:43:10.036293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:113656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.071 [2024-07-12 12:43:10.036308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:25.071 [2024-07-12 12:43:10.036328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:113664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.071 [2024-07-12 12:43:10.036343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:25.071 [2024-07-12 12:43:10.037066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:113672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.071 [2024-07-12 12:43:10.037093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.071 [2024-07-12 12:43:10.037128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:114248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.071 [2024-07-12 12:43:10.037153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.071 [2024-07-12 12:43:10.037184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:114256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.071 [2024-07-12 12:43:10.037199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.071 [2024-07-12 12:43:10.037228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:114264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.071 [2024-07-12 12:43:10.037243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:25.071 [2024-07-12 12:43:10.037272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:114272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.071 [2024-07-12 12:43:10.037299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:25.071 [2024-07-12 12:43:10.037330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:114280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.071 [2024-07-12 12:43:10.037346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:25.071 [2024-07-12 12:43:10.037376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:114288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.072 [2024-07-12 12:43:10.037391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:25.072 [2024-07-12 12:43:10.037434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:114296 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:25.072 [2024-07-12 12:43:10.037453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:25.072 [2024-07-12 12:43:10.037498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:114304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.072 [2024-07-12 12:43:10.037517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:25.072 [2024-07-12 12:43:10.037548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:114312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.072 [2024-07-12 12:43:10.037563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:25.072 [2024-07-12 12:43:10.037592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:113680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.072 [2024-07-12 12:43:10.037607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:25.072 [2024-07-12 12:43:10.037637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:113688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.072 [2024-07-12 12:43:10.037651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:25.072 [2024-07-12 12:43:10.037680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:113696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.072 [2024-07-12 12:43:10.037696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:25.072 [2024-07-12 12:43:10.037725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:113704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.072 [2024-07-12 12:43:10.037740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:25.072 [2024-07-12 12:43:10.037769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:113712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.072 [2024-07-12 12:43:10.037784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:25.072 [2024-07-12 12:43:10.037814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:113720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.072 [2024-07-12 12:43:10.037829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:25.072 [2024-07-12 12:43:10.037859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:113728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.072 [2024-07-12 12:43:10.037883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:25.072 [2024-07-12 12:43:10.037915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:106 nsid:1 lba:113736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.072 [2024-07-12 12:43:10.037933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:25.072 [2024-07-12 12:43:17.175206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.072 [2024-07-12 12:43:17.175282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:25.072 [2024-07-12 12:43:17.175341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:22240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.072 [2024-07-12 12:43:17.175362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:25.072 [2024-07-12 12:43:17.175385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.072 [2024-07-12 12:43:17.175413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:25.072 [2024-07-12 12:43:17.175438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.072 [2024-07-12 12:43:17.175453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:25.072 [2024-07-12 12:43:17.175484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.072 [2024-07-12 12:43:17.175502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:25.072 [2024-07-12 12:43:17.175523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.072 [2024-07-12 12:43:17.175538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:25.072 [2024-07-12 12:43:17.175559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.072 [2024-07-12 12:43:17.175573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:25.072 [2024-07-12 12:43:17.175594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.072 [2024-07-12 12:43:17.175609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:25.072 [2024-07-12 12:43:17.175630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.072 [2024-07-12 12:43:17.175647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.072 [2024-07-12 12:43:17.175668] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.072 [2024-07-12 12:43:17.175683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:25.072 [2024-07-12 12:43:17.175704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.072 [2024-07-12 12:43:17.175718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:25.072 [2024-07-12 12:43:17.175775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.072 [2024-07-12 12:43:17.175791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:25.072 [2024-07-12 12:43:17.175812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.072 [2024-07-12 12:43:17.175827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:25.072 [2024-07-12 12:43:17.175847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.072 [2024-07-12 12:43:17.175861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:25.072 [2024-07-12 12:43:17.175882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.072 [2024-07-12 12:43:17.175896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:25.072 [2024-07-12 12:43:17.175917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.072 [2024-07-12 12:43:17.175931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:25.072 [2024-07-12 12:43:17.175951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:21912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.072 [2024-07-12 12:43:17.175965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:25.072 [2024-07-12 12:43:17.175987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.072 [2024-07-12 12:43:17.176003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:25.072 [2024-07-12 12:43:17.176024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:21928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.072 [2024-07-12 12:43:17.176038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000a 
p:0 m:0 dnr:0 00:19:25.072 [2024-07-12 12:43:17.176059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.072 [2024-07-12 12:43:17.176073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:25.072 [2024-07-12 12:43:17.176094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:21944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.072 [2024-07-12 12:43:17.176108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:25.072 [2024-07-12 12:43:17.176129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.072 [2024-07-12 12:43:17.176143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:25.072 [2024-07-12 12:43:17.176164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.072 [2024-07-12 12:43:17.176179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:25.072 [2024-07-12 12:43:17.176211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.072 [2024-07-12 12:43:17.176228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:25.072 [2024-07-12 12:43:17.176328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:22296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.072 [2024-07-12 12:43:17.176349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:25.072 [2024-07-12 12:43:17.176372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.072 [2024-07-12 12:43:17.176387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:25.072 [2024-07-12 12:43:17.176421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.072 [2024-07-12 12:43:17.176439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:25.072 [2024-07-12 12:43:17.176460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:22320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.072 [2024-07-12 12:43:17.176475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:25.072 [2024-07-12 12:43:17.176496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.072 [2024-07-12 12:43:17.176511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:25.072 [2024-07-12 12:43:17.176531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:22336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.072 [2024-07-12 12:43:17.176546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:25.073 [2024-07-12 12:43:17.176567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.073 [2024-07-12 12:43:17.176581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:25.073 [2024-07-12 12:43:17.176602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.073 [2024-07-12 12:43:17.176616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:25.073 [2024-07-12 12:43:17.176637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.073 [2024-07-12 12:43:17.176651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:25.073 [2024-07-12 12:43:17.176673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.073 [2024-07-12 12:43:17.176688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:25.073 [2024-07-12 12:43:17.176709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:22376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.073 [2024-07-12 12:43:17.176724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:25.073 [2024-07-12 12:43:17.176744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.073 [2024-07-12 12:43:17.176769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:25.073 [2024-07-12 12:43:17.176792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.073 [2024-07-12 12:43:17.176808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:25.073 [2024-07-12 12:43:17.176829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.073 [2024-07-12 12:43:17.176843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:25.073 [2024-07-12 12:43:17.176864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.073 [2024-07-12 12:43:17.176879] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:25.073 [2024-07-12 12:43:17.176900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.073 [2024-07-12 12:43:17.176914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:25.073 [2024-07-12 12:43:17.176936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.073 [2024-07-12 12:43:17.176951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:25.073 [2024-07-12 12:43:17.176972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.073 [2024-07-12 12:43:17.176987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:25.073 [2024-07-12 12:43:17.177007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.073 [2024-07-12 12:43:17.177022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.073 [2024-07-12 12:43:17.177042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.073 [2024-07-12 12:43:17.177057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:25.073 [2024-07-12 12:43:17.177078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:22008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.073 [2024-07-12 12:43:17.177093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:25.073 [2024-07-12 12:43:17.177113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.073 [2024-07-12 12:43:17.177128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:25.073 [2024-07-12 12:43:17.177148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.073 [2024-07-12 12:43:17.177163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:25.073 [2024-07-12 12:43:17.177184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.073 [2024-07-12 12:43:17.177205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:25.073 [2024-07-12 12:43:17.177227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:25.073 [2024-07-12 12:43:17.177242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:25.073 [2024-07-12 12:43:17.177264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.073 [2024-07-12 12:43:17.177279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:25.073 [2024-07-12 12:43:17.177300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.073 [2024-07-12 12:43:17.177315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:25.073 [2024-07-12 12:43:17.177335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.073 [2024-07-12 12:43:17.177350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:25.073 [2024-07-12 12:43:17.177371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.073 [2024-07-12 12:43:17.177385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:25.073 [2024-07-12 12:43:17.177420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.073 [2024-07-12 12:43:17.177437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:25.073 [2024-07-12 12:43:17.177458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.073 [2024-07-12 12:43:17.177473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:25.073 [2024-07-12 12:43:17.177494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.073 [2024-07-12 12:43:17.177509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:25.073 [2024-07-12 12:43:17.177558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.073 [2024-07-12 12:43:17.177579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:25.073 [2024-07-12 12:43:17.177601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.073 [2024-07-12 12:43:17.177616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:25.073 [2024-07-12 12:43:17.177637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 
lba:22440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.073 [2024-07-12 12:43:17.177652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:25.073 [2024-07-12 12:43:17.177673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.073 [2024-07-12 12:43:17.177698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:25.073 [2024-07-12 12:43:17.177722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.073 [2024-07-12 12:43:17.177738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:25.073 [2024-07-12 12:43:17.177759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.073 [2024-07-12 12:43:17.177774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:25.073 [2024-07-12 12:43:17.177795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.073 [2024-07-12 12:43:17.177810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:25.073 [2024-07-12 12:43:17.177830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.073 [2024-07-12 12:43:17.177845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:25.073 [2024-07-12 12:43:17.177866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.073 [2024-07-12 12:43:17.177881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:25.073 [2024-07-12 12:43:17.177903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.073 [2024-07-12 12:43:17.177918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:25.073 [2024-07-12 12:43:17.177939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.073 [2024-07-12 12:43:17.177954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:25.073 [2024-07-12 12:43:17.177975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.073 [2024-07-12 12:43:17.177989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:25.073 [2024-07-12 12:43:17.178010] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.073 [2024-07-12 12:43:17.178025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:25.073 [2024-07-12 12:43:17.178045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.073 [2024-07-12 12:43:17.178060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:25.073 [2024-07-12 12:43:17.178081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.073 [2024-07-12 12:43:17.178095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:25.073 [2024-07-12 12:43:17.178117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.073 [2024-07-12 12:43:17.178132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:25.074 [2024-07-12 12:43:17.178161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.074 [2024-07-12 12:43:17.178177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:25.074 [2024-07-12 12:43:17.178199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.074 [2024-07-12 12:43:17.178214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:25.074 [2024-07-12 12:43:17.178235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.074 [2024-07-12 12:43:17.178249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:25.074 [2024-07-12 12:43:17.178270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.074 [2024-07-12 12:43:17.178285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:25.074 [2024-07-12 12:43:17.178306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:22136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.074 [2024-07-12 12:43:17.178320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:25.074 [2024-07-12 12:43:17.178341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.074 [2024-07-12 12:43:17.178356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 
00:19:25.074 [2024-07-12 12:43:17.178377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:22152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.074 [2024-07-12 12:43:17.178391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:25.074 [2024-07-12 12:43:17.178426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.074 [2024-07-12 12:43:17.178443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:25.074 [2024-07-12 12:43:17.178464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.074 [2024-07-12 12:43:17.178479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:25.074 [2024-07-12 12:43:17.178501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.074 [2024-07-12 12:43:17.178516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:25.074 [2024-07-12 12:43:17.178537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.074 [2024-07-12 12:43:17.178551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:25.074 [2024-07-12 12:43:17.178573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.074 [2024-07-12 12:43:17.178587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:25.074 [2024-07-12 12:43:17.178616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.074 [2024-07-12 12:43:17.178633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:25.074 [2024-07-12 12:43:17.178656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.074 [2024-07-12 12:43:17.178670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:25.074 [2024-07-12 12:43:17.178692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.074 [2024-07-12 12:43:17.178706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:25.074 [2024-07-12 12:43:17.178727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.074 [2024-07-12 12:43:17.178741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:25.074 [2024-07-12 12:43:17.178763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.074 [2024-07-12 12:43:17.178777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:25.074 [2024-07-12 12:43:17.178798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:22624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.074 [2024-07-12 12:43:17.178813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:25.074 [2024-07-12 12:43:17.178834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.074 [2024-07-12 12:43:17.178849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:25.074 [2024-07-12 12:43:17.178870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.074 [2024-07-12 12:43:17.178884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:25.074 [2024-07-12 12:43:17.178905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.074 [2024-07-12 12:43:17.178920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:25.074 [2024-07-12 12:43:17.178941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.074 [2024-07-12 12:43:17.178955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:25.074 [2024-07-12 12:43:17.178977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.074 [2024-07-12 12:43:17.178991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:25.074 [2024-07-12 12:43:17.179012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.074 [2024-07-12 12:43:17.179026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:25.074 [2024-07-12 12:43:17.179047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.074 [2024-07-12 12:43:17.179069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:25.074 [2024-07-12 12:43:17.179092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.074 [2024-07-12 12:43:17.179108] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:25.074 [2024-07-12 12:43:17.179129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.074 [2024-07-12 12:43:17.179144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:25.074 [2024-07-12 12:43:17.179165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.074 [2024-07-12 12:43:17.179180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:25.074 [2024-07-12 12:43:17.179211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:22168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.074 [2024-07-12 12:43:17.179226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:25.074 [2024-07-12 12:43:17.179247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.074 [2024-07-12 12:43:17.179261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:25.074 [2024-07-12 12:43:17.179282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:22184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.074 [2024-07-12 12:43:17.179297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:25.074 [2024-07-12 12:43:17.179318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.074 [2024-07-12 12:43:17.179332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:25.074 [2024-07-12 12:43:17.179353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.074 [2024-07-12 12:43:17.179368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:25.074 [2024-07-12 12:43:17.179389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.074 [2024-07-12 12:43:17.179415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:25.074 [2024-07-12 12:43:17.179439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.074 [2024-07-12 12:43:17.179454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:25.074 [2024-07-12 12:43:17.180206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:25.074 [2024-07-12 12:43:17.180233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:25.075 [2024-07-12 12:43:17.180267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:22712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.075 [2024-07-12 12:43:17.180295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:25.075 [2024-07-12 12:43:17.180328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.075 [2024-07-12 12:43:17.180344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:25.075 [2024-07-12 12:43:17.180373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.075 [2024-07-12 12:43:17.180389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:25.075 [2024-07-12 12:43:17.180435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.075 [2024-07-12 12:43:17.180461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:25.075 [2024-07-12 12:43:17.180491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.075 [2024-07-12 12:43:17.180506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:25.075 [2024-07-12 12:43:17.180535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.075 [2024-07-12 12:43:17.180550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:25.075 [2024-07-12 12:43:17.180579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.075 [2024-07-12 12:43:17.180594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:25.075 [2024-07-12 12:43:17.180640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.075 [2024-07-12 12:43:17.180659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:25.075 [2024-07-12 12:43:17.180690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.075 [2024-07-12 12:43:17.180705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:25.075 [2024-07-12 12:43:17.180734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 
lba:22784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.075 [2024-07-12 12:43:17.180749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:25.075 [2024-07-12 12:43:17.180778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.075 [2024-07-12 12:43:17.180793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:25.075 [2024-07-12 12:43:17.180822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.075 [2024-07-12 12:43:17.180837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:25.075 [2024-07-12 12:43:17.180866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.075 [2024-07-12 12:43:17.180880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:25.075 [2024-07-12 12:43:17.180923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.075 [2024-07-12 12:43:17.180940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:25.075 [2024-07-12 12:43:17.180970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.075 [2024-07-12 12:43:17.180985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:25.075 [2024-07-12 12:43:17.181017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.075 [2024-07-12 12:43:17.181033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:25.075 [2024-07-12 12:43:17.181062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:22840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.075 [2024-07-12 12:43:17.181078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:25.075 [2024-07-12 12:43:17.181108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.075 [2024-07-12 12:43:17.181122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:25.075 [2024-07-12 12:43:30.466144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:66392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.075 [2024-07-12 12:43:30.466208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:25.075 [2024-07-12 12:43:30.466269] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:66400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.075 [2024-07-12 12:43:30.466290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:25.075 [2024-07-12 12:43:30.466313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:66408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.075 [2024-07-12 12:43:30.466328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:25.075 [2024-07-12 12:43:30.466349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:66416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.075 [2024-07-12 12:43:30.466364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:25.075 [2024-07-12 12:43:30.466385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:66424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.075 [2024-07-12 12:43:30.466414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:25.075 [2024-07-12 12:43:30.466439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:66432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.075 [2024-07-12 12:43:30.466455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:25.075 [2024-07-12 12:43:30.466476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.075 [2024-07-12 12:43:30.466491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:25.075 [2024-07-12 12:43:30.466540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:66448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.075 [2024-07-12 12:43:30.466558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:25.075 [2024-07-12 12:43:30.466579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:66008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.075 [2024-07-12 12:43:30.466593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:25.075 [2024-07-12 12:43:30.466614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:66016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.075 [2024-07-12 12:43:30.466628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:25.075 [2024-07-12 12:43:30.466649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:66024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.075 [2024-07-12 12:43:30.466663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 
00:19:25.075 [2024-07-12 12:43:30.466684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:66032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.075 [2024-07-12 12:43:30.466698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:25.075 [2024-07-12 12:43:30.466719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:66040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.075 [2024-07-12 12:43:30.466734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:25.075 [2024-07-12 12:43:30.466754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:66048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.075 [2024-07-12 12:43:30.466768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:25.075 [2024-07-12 12:43:30.466790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:66056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.075 [2024-07-12 12:43:30.466804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:25.075 [2024-07-12 12:43:30.466825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.075 [2024-07-12 12:43:30.466840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:25.075 [2024-07-12 12:43:30.466862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:66072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.075 [2024-07-12 12:43:30.466878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:25.075 [2024-07-12 12:43:30.466900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.075 [2024-07-12 12:43:30.466915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:25.075 [2024-07-12 12:43:30.466937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:66088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.075 [2024-07-12 12:43:30.466951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:25.075 [2024-07-12 12:43:30.466972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:66096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.075 [2024-07-12 12:43:30.466997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:25.075 [2024-07-12 12:43:30.467020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:66104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.075 [2024-07-12 12:43:30.467035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:25.075 [2024-07-12 12:43:30.467056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:66112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.075 [2024-07-12 12:43:30.467071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:25.075 [2024-07-12 12:43:30.467092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:66120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.075 [2024-07-12 12:43:30.467106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:25.075 [2024-07-12 12:43:30.467128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:66128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.075 [2024-07-12 12:43:30.467142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:25.076 [2024-07-12 12:43:30.467195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:65976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.076 [2024-07-12 12:43:30.467216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.076 [2024-07-12 12:43:30.467233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:65984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.076 [2024-07-12 12:43:30.467247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.076 [2024-07-12 12:43:30.467262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:65992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.076 [2024-07-12 12:43:30.467276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.076 [2024-07-12 12:43:30.467292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:66000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.076 [2024-07-12 12:43:30.467305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.076 [2024-07-12 12:43:30.467321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:66456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.076 [2024-07-12 12:43:30.467334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.076 [2024-07-12 12:43:30.467350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.076 [2024-07-12 12:43:30.467363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.076 [2024-07-12 12:43:30.467378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:66472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.076 [2024-07-12 12:43:30.467392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.076 [2024-07-12 12:43:30.467424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:66480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.076 [2024-07-12 12:43:30.467440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.076 [2024-07-12 12:43:30.467465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:66488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.076 [2024-07-12 12:43:30.467494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.076 [2024-07-12 12:43:30.467515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:66496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.076 [2024-07-12 12:43:30.467529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.076 [2024-07-12 12:43:30.467546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.076 [2024-07-12 12:43:30.467559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.076 [2024-07-12 12:43:30.467575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:66512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.076 [2024-07-12 12:43:30.467588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.076 [2024-07-12 12:43:30.467603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:66520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.076 [2024-07-12 12:43:30.467617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.076 [2024-07-12 12:43:30.467633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:66528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.076 [2024-07-12 12:43:30.467647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.076 [2024-07-12 12:43:30.467662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:66536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.076 [2024-07-12 12:43:30.467675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.076 [2024-07-12 12:43:30.467691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:66544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.076 [2024-07-12 12:43:30.467704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.076 [2024-07-12 12:43:30.467720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.076 [2024-07-12 12:43:30.467733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
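The "(03/02)" and "(00/08)" pairs printed by spdk_nvme_print_completion above are the NVMe status code type / status code of each completion: SCT 0x3 (path-related) with SC 0x2 is "Asymmetric Access Inaccessible" (the ANA state change driven by this failover test), and SCT 0x0 (generic) with SC 0x8 is "Command Aborted due to SQ Deletion" (outstanding I/O flushed when the qpair is torn down). As a reference, the sketch below decodes those fields from completion-entry DW3 per the NVMe base specification; it is a standalone, plain-C illustration with made-up helper names (nvme_status, decode_dw3, status_name), not SPDK's own structures or API.

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative only -- field offsets follow NVMe completion queue entry DW3. */
    struct nvme_status {
        uint16_t cid;  /* command identifier, DW3 bits 15:0  */
        uint8_t  p;    /* phase tag,          DW3 bit  16    */
        uint8_t  sc;   /* status code,        DW3 bits 24:17 */
        uint8_t  sct;  /* status code type,   DW3 bits 27:25 */
        uint8_t  m;    /* more,               DW3 bit  30    */
        uint8_t  dnr;  /* do not retry,       DW3 bit  31    */
    };

    static struct nvme_status decode_dw3(uint32_t dw3)
    {
        struct nvme_status s;
        s.cid = dw3 & 0xffff;
        s.p   = (dw3 >> 16) & 0x1;
        s.sc  = (dw3 >> 17) & 0xff;
        s.sct = (dw3 >> 25) & 0x7;
        s.m   = (dw3 >> 30) & 0x1;
        s.dnr = (dw3 >> 31) & 0x1;
        return s;
    }

    static const char *status_name(uint8_t sct, uint8_t sc)
    {
        if (sct == 0x3 && sc == 0x2) return "ASYMMETRIC ACCESS INACCESSIBLE"; /* path-related / ANA */
        if (sct == 0x0 && sc == 0x8) return "ABORTED - SQ DELETION";          /* generic / qpair deleted */
        return "OTHER";
    }

    int main(void)
    {
        /* The two (sct/sc) pairs seen in this log: (03/02) and (00/08). */
        uint32_t samples[] = { (0x3u << 25) | (0x2u << 17), (0x0u << 25) | (0x8u << 17) };
        for (unsigned i = 0; i < 2; i++) {
            struct nvme_status s = decode_dw3(samples[i]);
            printf("(%02x/%02x) -> %s (m:%u dnr:%u)\n",
                   s.sct, s.sc, status_name(s.sct, s.sc), s.m, s.dnr);
        }
        return 0;
    }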
00:19:25.076 [2024-07-12 12:43:30.467750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:66560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.076 [2024-07-12 12:43:30.467763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.076 [2024-07-12 12:43:30.467779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:66568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.076 [2024-07-12 12:43:30.467792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.076 [2024-07-12 12:43:30.467808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:66576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.076 [2024-07-12 12:43:30.467822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.076 [2024-07-12 12:43:30.467838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:66136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.076 [2024-07-12 12:43:30.467859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.076 [2024-07-12 12:43:30.467875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:66144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.076 [2024-07-12 12:43:30.467889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.076 [2024-07-12 12:43:30.467905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:66152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.076 [2024-07-12 12:43:30.467919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.076 [2024-07-12 12:43:30.467934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:66160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.076 [2024-07-12 12:43:30.467947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.076 [2024-07-12 12:43:30.467963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:66168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.076 [2024-07-12 12:43:30.467976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.076 [2024-07-12 12:43:30.467992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:66176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.076 [2024-07-12 12:43:30.468006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.076 [2024-07-12 12:43:30.468021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:66184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.076 [2024-07-12 12:43:30.468035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.076 [2024-07-12 12:43:30.468050] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:66192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.076 [2024-07-12 12:43:30.468064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.076 [2024-07-12 12:43:30.468079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:66200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.076 [2024-07-12 12:43:30.468092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.076 [2024-07-12 12:43:30.468108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:66208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.076 [2024-07-12 12:43:30.468121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.076 [2024-07-12 12:43:30.468136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:66216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.076 [2024-07-12 12:43:30.468150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.076 [2024-07-12 12:43:30.468165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:66224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.076 [2024-07-12 12:43:30.468179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.076 [2024-07-12 12:43:30.468194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:66232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.076 [2024-07-12 12:43:30.468207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.076 [2024-07-12 12:43:30.468228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:66240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.076 [2024-07-12 12:43:30.468243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.076 [2024-07-12 12:43:30.468259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:66248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.076 [2024-07-12 12:43:30.468272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.076 [2024-07-12 12:43:30.468288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:66256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.076 [2024-07-12 12:43:30.468302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.076 [2024-07-12 12:43:30.468317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:66584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.076 [2024-07-12 12:43:30.468331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.076 [2024-07-12 12:43:30.468346] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.076 [2024-07-12 12:43:30.468359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.076 [2024-07-12 12:43:30.468374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:66600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.076 [2024-07-12 12:43:30.468388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.076 [2024-07-12 12:43:30.468414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:66608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.076 [2024-07-12 12:43:30.468429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.076 [2024-07-12 12:43:30.468445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:66616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.076 [2024-07-12 12:43:30.468459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.076 [2024-07-12 12:43:30.468474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:66624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.076 [2024-07-12 12:43:30.468488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.076 [2024-07-12 12:43:30.468503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:66632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.076 [2024-07-12 12:43:30.468517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.076 [2024-07-12 12:43:30.468532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:66640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.077 [2024-07-12 12:43:30.468545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.077 [2024-07-12 12:43:30.468563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:66648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.077 [2024-07-12 12:43:30.468576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.077 [2024-07-12 12:43:30.468592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:66656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.077 [2024-07-12 12:43:30.468612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.077 [2024-07-12 12:43:30.468629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:66664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.077 [2024-07-12 12:43:30.468643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.077 [2024-07-12 12:43:30.468658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:61 nsid:1 lba:66672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.077 [2024-07-12 12:43:30.468672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.077 [2024-07-12 12:43:30.468687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:66680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.077 [2024-07-12 12:43:30.468700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.077 [2024-07-12 12:43:30.468715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:66688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.077 [2024-07-12 12:43:30.468729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.077 [2024-07-12 12:43:30.468744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:66696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.077 [2024-07-12 12:43:30.468758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.077 [2024-07-12 12:43:30.468774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:66704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.077 [2024-07-12 12:43:30.468787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.077 [2024-07-12 12:43:30.468803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.077 [2024-07-12 12:43:30.468816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.077 [2024-07-12 12:43:30.468832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:66272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.077 [2024-07-12 12:43:30.468845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.077 [2024-07-12 12:43:30.468860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:66280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.077 [2024-07-12 12:43:30.468874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.077 [2024-07-12 12:43:30.468889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:66288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.077 [2024-07-12 12:43:30.468903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.077 [2024-07-12 12:43:30.468918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:66296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.077 [2024-07-12 12:43:30.468932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.077 [2024-07-12 12:43:30.468948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:66304 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:25.077 [2024-07-12 12:43:30.468962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.077 [2024-07-12 12:43:30.468977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:66312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.077 [2024-07-12 12:43:30.468996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.077 [2024-07-12 12:43:30.469012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:66320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.077 [2024-07-12 12:43:30.469026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.077 [2024-07-12 12:43:30.469041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:66712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.077 [2024-07-12 12:43:30.469054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.077 [2024-07-12 12:43:30.469070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:66720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.077 [2024-07-12 12:43:30.469083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.077 [2024-07-12 12:43:30.469098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:66728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.077 [2024-07-12 12:43:30.469112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.077 [2024-07-12 12:43:30.469126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:66736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.077 [2024-07-12 12:43:30.469140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.077 [2024-07-12 12:43:30.469155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:66744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.077 [2024-07-12 12:43:30.469168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.077 [2024-07-12 12:43:30.469184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:66752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.077 [2024-07-12 12:43:30.469197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.077 [2024-07-12 12:43:30.469222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:66760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.077 [2024-07-12 12:43:30.469236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.077 [2024-07-12 12:43:30.469251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:66768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.077 [2024-07-12 
12:43:30.469265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.077 [2024-07-12 12:43:30.469280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:66776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.077 [2024-07-12 12:43:30.469301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.077 [2024-07-12 12:43:30.469316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:66784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.077 [2024-07-12 12:43:30.469330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.077 [2024-07-12 12:43:30.469345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:66792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.077 [2024-07-12 12:43:30.469359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.077 [2024-07-12 12:43:30.469380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:66800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.077 [2024-07-12 12:43:30.469394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.077 [2024-07-12 12:43:30.469428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:66808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.077 [2024-07-12 12:43:30.469443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.077 [2024-07-12 12:43:30.469458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:66816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.077 [2024-07-12 12:43:30.469472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.077 [2024-07-12 12:43:30.469487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:66824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.077 [2024-07-12 12:43:30.469500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.077 [2024-07-12 12:43:30.469515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:66832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.077 [2024-07-12 12:43:30.469529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.077 [2024-07-12 12:43:30.469544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:66328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.077 [2024-07-12 12:43:30.469558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.077 [2024-07-12 12:43:30.469573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:66336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.077 [2024-07-12 12:43:30.469587] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.077 [2024-07-12 12:43:30.469602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:66344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.077 [2024-07-12 12:43:30.469616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.077 [2024-07-12 12:43:30.469631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.077 [2024-07-12 12:43:30.469645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.077 [2024-07-12 12:43:30.469660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:66360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.077 [2024-07-12 12:43:30.469673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.077 [2024-07-12 12:43:30.469688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:66368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.077 [2024-07-12 12:43:30.469701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.077 [2024-07-12 12:43:30.469723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:66376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.077 [2024-07-12 12:43:30.469737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.077 [2024-07-12 12:43:30.469751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe756d0 is same with the state(5) to be set 00:19:25.077 [2024-07-12 12:43:30.469775] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.077 [2024-07-12 12:43:30.469786] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.077 [2024-07-12 12:43:30.469797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66384 len:8 PRP1 0x0 PRP2 0x0 00:19:25.077 [2024-07-12 12:43:30.469811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.077 [2024-07-12 12:43:30.469825] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.077 [2024-07-12 12:43:30.469835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.077 [2024-07-12 12:43:30.469845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66840 len:8 PRP1 0x0 PRP2 0x0 00:19:25.078 [2024-07-12 12:43:30.469858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.078 [2024-07-12 12:43:30.469871] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.078 [2024-07-12 12:43:30.469881] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.078 [2024-07-12 12:43:30.469892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:66848 len:8 PRP1 0x0 PRP2 0x0 00:19:25.078 [2024-07-12 12:43:30.469906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.078 [2024-07-12 12:43:30.469919] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.078 [2024-07-12 12:43:30.469929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.078 [2024-07-12 12:43:30.469939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66856 len:8 PRP1 0x0 PRP2 0x0 00:19:25.078 [2024-07-12 12:43:30.469952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.078 [2024-07-12 12:43:30.469966] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.078 [2024-07-12 12:43:30.469976] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.078 [2024-07-12 12:43:30.469986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66864 len:8 PRP1 0x0 PRP2 0x0 00:19:25.078 [2024-07-12 12:43:30.469999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.078 [2024-07-12 12:43:30.470012] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.078 [2024-07-12 12:43:30.470022] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.078 [2024-07-12 12:43:30.470032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66872 len:8 PRP1 0x0 PRP2 0x0 00:19:25.078 [2024-07-12 12:43:30.470045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.078 [2024-07-12 12:43:30.470058] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.078 [2024-07-12 12:43:30.470068] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.078 [2024-07-12 12:43:30.470078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66880 len:8 PRP1 0x0 PRP2 0x0 00:19:25.078 [2024-07-12 12:43:30.470091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.078 [2024-07-12 12:43:30.470104] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.078 [2024-07-12 12:43:30.470115] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.078 [2024-07-12 12:43:30.470125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66888 len:8 PRP1 0x0 PRP2 0x0 00:19:25.078 [2024-07-12 12:43:30.470143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.078 [2024-07-12 12:43:30.470158] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.078 [2024-07-12 12:43:30.470168] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.078 [2024-07-12 12:43:30.470178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66896 len:8 PRP1 0x0 PRP2 
0x0 00:19:25.078 [2024-07-12 12:43:30.470191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.078 [2024-07-12 12:43:30.470204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.078 [2024-07-12 12:43:30.470214] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.078 [2024-07-12 12:43:30.470225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66904 len:8 PRP1 0x0 PRP2 0x0 00:19:25.078 [2024-07-12 12:43:30.470238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.078 [2024-07-12 12:43:30.470251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.078 [2024-07-12 12:43:30.470261] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.078 [2024-07-12 12:43:30.470272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66912 len:8 PRP1 0x0 PRP2 0x0 00:19:25.078 [2024-07-12 12:43:30.470285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.078 [2024-07-12 12:43:30.470299] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.078 [2024-07-12 12:43:30.470309] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.078 [2024-07-12 12:43:30.470319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66920 len:8 PRP1 0x0 PRP2 0x0 00:19:25.078 [2024-07-12 12:43:30.470332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.078 [2024-07-12 12:43:30.470345] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.078 [2024-07-12 12:43:30.470355] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.078 [2024-07-12 12:43:30.470365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66928 len:8 PRP1 0x0 PRP2 0x0 00:19:25.078 [2024-07-12 12:43:30.470378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.078 [2024-07-12 12:43:30.470391] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.078 [2024-07-12 12:43:30.470413] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.078 [2024-07-12 12:43:30.470425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66936 len:8 PRP1 0x0 PRP2 0x0 00:19:25.078 [2024-07-12 12:43:30.470439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.078 [2024-07-12 12:43:30.470453] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.078 [2024-07-12 12:43:30.470463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.078 [2024-07-12 12:43:30.470473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66944 len:8 PRP1 0x0 PRP2 0x0 00:19:25.078 [2024-07-12 12:43:30.470487] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.078 [2024-07-12 12:43:30.470508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.078 [2024-07-12 12:43:30.470518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.078 [2024-07-12 12:43:30.470534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66952 len:8 PRP1 0x0 PRP2 0x0 00:19:25.078 [2024-07-12 12:43:30.470548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.078 [2024-07-12 12:43:30.470563] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.078 [2024-07-12 12:43:30.470572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.078 [2024-07-12 12:43:30.470583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66960 len:8 PRP1 0x0 PRP2 0x0 00:19:25.078 [2024-07-12 12:43:30.470596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.078 [2024-07-12 12:43:30.470609] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.078 [2024-07-12 12:43:30.470619] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.078 [2024-07-12 12:43:30.470629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66968 len:8 PRP1 0x0 PRP2 0x0 00:19:25.078 [2024-07-12 12:43:30.470642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.078 [2024-07-12 12:43:30.470656] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.078 [2024-07-12 12:43:30.470665] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.078 [2024-07-12 12:43:30.470676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66976 len:8 PRP1 0x0 PRP2 0x0 00:19:25.078 [2024-07-12 12:43:30.470689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.078 [2024-07-12 12:43:30.470702] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.078 [2024-07-12 12:43:30.470712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.078 [2024-07-12 12:43:30.470722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66984 len:8 PRP1 0x0 PRP2 0x0 00:19:25.078 [2024-07-12 12:43:30.470734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.078 [2024-07-12 12:43:30.470748] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.078 [2024-07-12 12:43:30.470757] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.078 [2024-07-12 12:43:30.470767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66992 len:8 PRP1 0x0 PRP2 0x0 00:19:25.078 [2024-07-12 12:43:30.470780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.078 [2024-07-12 12:43:30.470853] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe756d0 was disconnected and freed. reset controller. 00:19:25.078 [2024-07-12 12:43:30.470968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:25.078 [2024-07-12 12:43:30.470993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.078 [2024-07-12 12:43:30.471016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:25.078 [2024-07-12 12:43:30.471030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.078 [2024-07-12 12:43:30.471045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:25.078 [2024-07-12 12:43:30.471057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.078 [2024-07-12 12:43:30.471081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:25.078 [2024-07-12 12:43:30.471095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.079 [2024-07-12 12:43:30.471110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.079 [2024-07-12 12:43:30.471124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.079 [2024-07-12 12:43:30.471144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdef100 is same with the state(5) to be set 00:19:25.079 [2024-07-12 12:43:30.472348] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:25.079 [2024-07-12 12:43:30.472389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdef100 (9): Bad file descriptor 00:19:25.079 [2024-07-12 12:43:30.472853] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:25.079 [2024-07-12 12:43:30.472886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdef100 with addr=10.0.0.2, port=4421 00:19:25.079 [2024-07-12 12:43:30.472902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdef100 is same with the state(5) to be set 00:19:25.079 [2024-07-12 12:43:30.472973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdef100 (9): Bad file descriptor 00:19:25.079 [2024-07-12 12:43:30.473008] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:25.079 [2024-07-12 12:43:30.473024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:25.079 [2024-07-12 12:43:30.473039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:25.079 [2024-07-12 12:43:30.473071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:25.079 [2024-07-12 12:43:30.473088] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:25.079 [2024-07-12 12:43:40.533984] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:25.079 Received shutdown signal, test time was about 55.357641 seconds 00:19:25.079 00:19:25.079 Latency(us) 00:19:25.079 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.079 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:25.079 Verification LBA range: start 0x0 length 0x4000 00:19:25.079 Nvme0n1 : 55.36 7439.46 29.06 0.00 0.00 17172.83 202.01 7046430.72 00:19:25.079 =================================================================================================================== 00:19:25.079 Total : 7439.46 29.06 0.00 0.00 17172.83 202.01 7046430.72 00:19:25.079 12:43:50 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:25.337 12:43:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:19:25.337 12:43:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:25.337 12:43:51 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:19:25.337 12:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:25.337 12:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:19:25.337 12:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:25.337 12:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:19:25.337 12:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:25.337 12:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:25.337 rmmod nvme_tcp 00:19:25.337 rmmod nvme_fabrics 00:19:25.337 rmmod nvme_keyring 00:19:25.337 12:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:25.337 12:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:19:25.337 12:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:19:25.337 12:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 81208 ']' 00:19:25.337 12:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 81208 00:19:25.337 12:43:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@948 -- # '[' -z 81208 ']' 00:19:25.337 12:43:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # kill -0 81208 00:19:25.337 12:43:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # uname 00:19:25.337 12:43:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:25.337 12:43:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 81208 00:19:25.337 12:43:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:25.337 12:43:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:25.337 12:43:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@966 -- # echo 'killing process with pid 81208' 
00:19:25.337 killing process with pid 81208 00:19:25.337 12:43:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@967 -- # kill 81208 00:19:25.337 12:43:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@972 -- # wait 81208 00:19:25.595 12:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:25.595 12:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:25.595 12:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:25.595 12:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:25.595 12:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:25.595 12:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.595 12:43:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:25.595 12:43:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.595 12:43:51 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:25.595 00:19:25.595 real 1m1.512s 00:19:25.595 user 2m50.847s 00:19:25.595 sys 0m18.256s 00:19:25.595 12:43:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:25.595 ************************************ 00:19:25.595 12:43:51 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:19:25.595 END TEST nvmf_host_multipath 00:19:25.595 ************************************ 00:19:25.595 12:43:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:25.595 12:43:51 nvmf_tcp -- nvmf/nvmf.sh@118 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:25.595 12:43:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:25.595 12:43:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:25.595 12:43:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:25.595 ************************************ 00:19:25.595 START TEST nvmf_timeout 00:19:25.595 ************************************ 00:19:25.595 12:43:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:19:25.854 * Looking for test storage... 
00:19:25.854 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=16360ad5-8c23-4d49-afe0-9a35c426fec5 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.854 
12:43:51 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.854 12:43:51 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:25.854 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:25.855 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:25.855 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:25.855 Cannot find device "nvmf_tgt_br" 00:19:25.855 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:19:25.855 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:25.855 Cannot find device "nvmf_tgt_br2" 00:19:25.855 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:19:25.855 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:25.855 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:25.855 Cannot find device "nvmf_tgt_br" 00:19:25.855 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:19:25.855 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:25.855 Cannot find device "nvmf_tgt_br2" 00:19:25.855 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:19:25.855 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:25.855 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:25.855 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:25.855 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:25.855 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:19:25.855 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:25.855 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:25.855 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:19:25.855 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:25.855 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:25.855 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:25.855 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:25.855 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:25.855 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:26.114 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:26.114 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:26.114 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:26.114 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:26.114 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:26.114 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:26.114 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:26.114 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:26.114 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:26.114 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:26.114 12:43:51 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:26.114 12:43:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:26.114 12:43:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:26.114 12:43:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:26.114 12:43:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:26.114 12:43:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:26.114 12:43:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:26.114 12:43:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:26.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:26.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:19:26.114 00:19:26.114 --- 10.0.0.2 ping statistics --- 00:19:26.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.114 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:19:26.114 12:43:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:26.114 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:26.114 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:19:26.114 00:19:26.114 --- 10.0.0.3 ping statistics --- 00:19:26.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.114 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:19:26.114 12:43:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:26.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:26.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:19:26.114 00:19:26.114 --- 10.0.0.1 ping statistics --- 00:19:26.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.114 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:19:26.114 12:43:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:26.114 12:43:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:19:26.114 12:43:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:26.114 12:43:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:26.114 12:43:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:26.114 12:43:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:26.114 12:43:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:26.114 12:43:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:26.114 12:43:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:26.114 12:43:52 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:19:26.114 12:43:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:26.114 12:43:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:26.114 12:43:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:26.114 12:43:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=82369 00:19:26.114 12:43:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:26.114 12:43:52 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 82369 00:19:26.114 12:43:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82369 ']' 00:19:26.114 12:43:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.114 12:43:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:26.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:26.114 12:43:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.114 12:43:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:26.114 12:43:52 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:26.114 [2024-07-12 12:43:52.139043] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:19:26.114 [2024-07-12 12:43:52.139136] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:26.373 [2024-07-12 12:43:52.275781] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:26.373 [2024-07-12 12:43:52.406985] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:26.373 [2024-07-12 12:43:52.407055] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:26.373 [2024-07-12 12:43:52.407070] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:26.373 [2024-07-12 12:43:52.407081] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:26.373 [2024-07-12 12:43:52.407091] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:26.373 [2024-07-12 12:43:52.407275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:26.373 [2024-07-12 12:43:52.408006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.632 [2024-07-12 12:43:52.464695] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:27.199 12:43:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:27.199 12:43:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:19:27.199 12:43:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:27.199 12:43:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:27.199 12:43:53 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:27.199 12:43:53 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:27.199 12:43:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:27.199 12:43:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:27.456 [2024-07-12 12:43:53.398754] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:27.456 12:43:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:27.714 Malloc0 00:19:27.714 12:43:53 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:28.280 12:43:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:28.280 12:43:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:28.538 [2024-07-12 12:43:54.541178] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:28.538 12:43:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=82418 00:19:28.538 12:43:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:28.538 12:43:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 82418 /var/tmp/bdevperf.sock 00:19:28.538 12:43:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82418 ']' 00:19:28.538 12:43:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:28.538 12:43:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:28.538 12:43:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:28.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:28.538 12:43:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:28.538 12:43:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:28.796 [2024-07-12 12:43:54.609077] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:19:28.796 [2024-07-12 12:43:54.609181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82418 ] 00:19:28.796 [2024-07-12 12:43:54.743836] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.796 [2024-07-12 12:43:54.865891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:29.054 [2024-07-12 12:43:54.926061] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:29.619 12:43:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:29.619 12:43:55 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:19:29.619 12:43:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:29.876 12:43:55 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:30.134 NVMe0n1 00:19:30.134 12:43:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=82442 00:19:30.134 12:43:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:30.134 12:43:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:19:30.134 Running I/O for 10 seconds... 
00:19:31.068 12:43:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:31.328 [2024-07-12 12:43:57.284314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:61600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.328 [2024-07-12 12:43:57.284382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.328 [2024-07-12 12:43:57.284643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:61728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.328 [2024-07-12 12:43:57.284670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.328 [2024-07-12 12:43:57.284688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:61736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.328 [2024-07-12 12:43:57.284699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.328 [2024-07-12 12:43:57.284710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:61744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.328 [2024-07-12 12:43:57.284720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.328 [2024-07-12 12:43:57.284732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:61752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.328 [2024-07-12 12:43:57.284742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.328 [2024-07-12 12:43:57.284753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:61760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.328 [2024-07-12 12:43:57.284763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.328 [2024-07-12 12:43:57.284774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:61768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.328 [2024-07-12 12:43:57.284784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.328 [2024-07-12 12:43:57.284795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:61776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.328 [2024-07-12 12:43:57.284805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.328 [2024-07-12 12:43:57.284816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:61784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.328 [2024-07-12 12:43:57.284825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.328 [2024-07-12 12:43:57.284837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:61792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.328 
[2024-07-12 12:43:57.284847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.328 [2024-07-12 12:43:57.284858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.328 [2024-07-12 12:43:57.284868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.328 [2024-07-12 12:43:57.284880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:61808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.328 [2024-07-12 12:43:57.284889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.328 [2024-07-12 12:43:57.284900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:61816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.328 [2024-07-12 12:43:57.284909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.328 [2024-07-12 12:43:57.284920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.328 [2024-07-12 12:43:57.284929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.328 [2024-07-12 12:43:57.284949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:61832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.328 [2024-07-12 12:43:57.284961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.328 [2024-07-12 12:43:57.284973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:61840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.328 [2024-07-12 12:43:57.284982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.328 [2024-07-12 12:43:57.284994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:61848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.329 [2024-07-12 12:43:57.285003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.329 [2024-07-12 12:43:57.285015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:61856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.329 [2024-07-12 12:43:57.285025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.329 [2024-07-12 12:43:57.285036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:61864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.329 [2024-07-12 12:43:57.285045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.329 [2024-07-12 12:43:57.285056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:61872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.329 [2024-07-12 12:43:57.285065] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.329 [2024-07-12 12:43:57.285076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:61880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.329 [2024-07-12 12:43:57.285085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.329 [2024-07-12 12:43:57.285096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:61888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.329 [2024-07-12 12:43:57.285106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.329 [2024-07-12 12:43:57.285117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:61896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.329 [2024-07-12 12:43:57.285126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.329 [2024-07-12 12:43:57.285137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:61904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.329 [2024-07-12 12:43:57.285147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.329 [2024-07-12 12:43:57.285157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:61912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.329 [2024-07-12 12:43:57.285168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.329 [2024-07-12 12:43:57.285179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:61920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.329 [2024-07-12 12:43:57.285189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.329 [2024-07-12 12:43:57.285201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:61928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.329 [2024-07-12 12:43:57.285211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.329 [2024-07-12 12:43:57.285223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:61936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.329 [2024-07-12 12:43:57.285232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.329 [2024-07-12 12:43:57.285244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:61944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.329 [2024-07-12 12:43:57.285253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.329 [2024-07-12 12:43:57.285264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:61952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.329 [2024-07-12 12:43:57.285274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.329 [2024-07-12 12:43:57.285284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:61960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.329 [2024-07-12 12:43:57.285294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.329 [2024-07-12 12:43:57.285305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:61968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.329 [2024-07-12 12:43:57.285315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.329 [2024-07-12 12:43:57.285328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:61976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.329 [2024-07-12 12:43:57.285338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.329 [2024-07-12 12:43:57.285349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:61984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.329 [2024-07-12 12:43:57.285359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.329 [2024-07-12 12:43:57.285370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:61992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.329 [2024-07-12 12:43:57.285379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.329 [2024-07-12 12:43:57.285390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.329 [2024-07-12 12:43:57.285399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.329 [2024-07-12 12:43:57.285427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:62008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.329 [2024-07-12 12:43:57.285437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.329 [2024-07-12 12:43:57.285448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:62016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.329 [2024-07-12 12:43:57.285458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.329 [2024-07-12 12:43:57.285469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:62024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.329 [2024-07-12 12:43:57.285478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.329 [2024-07-12 12:43:57.285491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:62032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.329 [2024-07-12 12:43:57.285500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:19:31.329 [2024-07-12 12:43:57.285512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:62040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.329 [2024-07-12 12:43:57.285521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.329 [2024-07-12 12:43:57.285533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:62048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.329 [2024-07-12 12:43:57.285542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.329 [2024-07-12 12:43:57.285553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:62056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.329 [2024-07-12 12:43:57.285562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.329 [2024-07-12 12:43:57.285574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:62064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.329 [2024-07-12 12:43:57.285583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.329 [2024-07-12 12:43:57.285595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:62072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.329 [2024-07-12 12:43:57.285604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.329 [2024-07-12 12:43:57.285615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:62080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.329 [2024-07-12 12:43:57.285624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.329 [2024-07-12 12:43:57.285634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:62088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.329 [2024-07-12 12:43:57.285644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.329 [2024-07-12 12:43:57.285655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:62096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.329 [2024-07-12 12:43:57.285664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.329 [2024-07-12 12:43:57.285676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:62104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.329 [2024-07-12 12:43:57.285687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.329 [2024-07-12 12:43:57.285699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:62112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.329 [2024-07-12 12:43:57.285708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.329 [2024-07-12 12:43:57.285720] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:62120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.329 [2024-07-12 12:43:57.285729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.329 [2024-07-12 12:43:57.285740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:62128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.329 [2024-07-12 12:43:57.285750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.329 [2024-07-12 12:43:57.285761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:62136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.329 [2024-07-12 12:43:57.285771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.329 [2024-07-12 12:43:57.285782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:62144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.329 [2024-07-12 12:43:57.285792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.329 [2024-07-12 12:43:57.285803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:62152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.329 [2024-07-12 12:43:57.285812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.329 [2024-07-12 12:43:57.285823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:62160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.329 [2024-07-12 12:43:57.285832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.329 [2024-07-12 12:43:57.285843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:62168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.329 [2024-07-12 12:43:57.285853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.329 [2024-07-12 12:43:57.285865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:62176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.329 [2024-07-12 12:43:57.285874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.329 [2024-07-12 12:43:57.285885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:62184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.329 [2024-07-12 12:43:57.285894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.330 [2024-07-12 12:43:57.285905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:62192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.330 [2024-07-12 12:43:57.285915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.330 [2024-07-12 12:43:57.285926] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:62200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.330 [2024-07-12 12:43:57.285936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.330 [2024-07-12 12:43:57.285947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:62208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.330 [2024-07-12 12:43:57.285956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.330 [2024-07-12 12:43:57.285967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:62216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.330 [2024-07-12 12:43:57.285978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.330 [2024-07-12 12:43:57.285989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:62224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.330 [2024-07-12 12:43:57.285998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.330 [2024-07-12 12:43:57.286011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:62232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.330 [2024-07-12 12:43:57.286021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.330 [2024-07-12 12:43:57.286032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:62240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.330 [2024-07-12 12:43:57.286042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.330 [2024-07-12 12:43:57.286054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:62248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.330 [2024-07-12 12:43:57.286063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.330 [2024-07-12 12:43:57.286074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:62256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.330 [2024-07-12 12:43:57.286084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.330 [2024-07-12 12:43:57.286095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:62264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.330 [2024-07-12 12:43:57.286105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.330 [2024-07-12 12:43:57.286116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:62272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.330 [2024-07-12 12:43:57.286125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.330 [2024-07-12 12:43:57.286136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:62280 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.330 [2024-07-12 12:43:57.286145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.330 [2024-07-12 12:43:57.286155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:62288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.330 [2024-07-12 12:43:57.286164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.330 [2024-07-12 12:43:57.286175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:62296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.330 [2024-07-12 12:43:57.286184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.330 [2024-07-12 12:43:57.286195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:62304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.330 [2024-07-12 12:43:57.286204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.330 [2024-07-12 12:43:57.286215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:62312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.330 [2024-07-12 12:43:57.286224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.330 [2024-07-12 12:43:57.286234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:62320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.330 [2024-07-12 12:43:57.286244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.330 [2024-07-12 12:43:57.286254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:62328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.330 [2024-07-12 12:43:57.286264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.330 [2024-07-12 12:43:57.286276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:62336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.330 [2024-07-12 12:43:57.286285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.330 [2024-07-12 12:43:57.286296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:62344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.330 [2024-07-12 12:43:57.286306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.330 [2024-07-12 12:43:57.286318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:62352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.330 [2024-07-12 12:43:57.286327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.330 [2024-07-12 12:43:57.286340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:62360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.330 
[2024-07-12 12:43:57.286350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.330 [2024-07-12 12:43:57.286362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:62368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.330 [2024-07-12 12:43:57.286371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.330 [2024-07-12 12:43:57.286383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:62376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.330 [2024-07-12 12:43:57.286392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.330 [2024-07-12 12:43:57.286412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:62384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.330 [2024-07-12 12:43:57.286423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.330 [2024-07-12 12:43:57.286435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:62392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.330 [2024-07-12 12:43:57.286444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.330 [2024-07-12 12:43:57.286456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:62400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.330 [2024-07-12 12:43:57.286465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.330 [2024-07-12 12:43:57.286476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:62408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.330 [2024-07-12 12:43:57.286486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.330 [2024-07-12 12:43:57.286497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:62416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.330 [2024-07-12 12:43:57.286506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.330 [2024-07-12 12:43:57.286517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:62424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.330 [2024-07-12 12:43:57.286527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.330 [2024-07-12 12:43:57.286539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:62432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.330 [2024-07-12 12:43:57.286549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.330 [2024-07-12 12:43:57.286560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:62440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.330 [2024-07-12 12:43:57.286570] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.330 [2024-07-12 12:43:57.286581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:62448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.330 [2024-07-12 12:43:57.286591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.330 [2024-07-12 12:43:57.286602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:62456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.330 [2024-07-12 12:43:57.286611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.330 [2024-07-12 12:43:57.286622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:62464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.330 [2024-07-12 12:43:57.286631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.330 [2024-07-12 12:43:57.286642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:62472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.330 [2024-07-12 12:43:57.286660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.330 [2024-07-12 12:43:57.286673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:62480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.330 [2024-07-12 12:43:57.286682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.330 [2024-07-12 12:43:57.286693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:62488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.330 [2024-07-12 12:43:57.286703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.330 [2024-07-12 12:43:57.286724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:62496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.330 [2024-07-12 12:43:57.286734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.330 [2024-07-12 12:43:57.286746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:62504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.330 [2024-07-12 12:43:57.286755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.330 [2024-07-12 12:43:57.286767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:62512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.330 [2024-07-12 12:43:57.286776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.330 [2024-07-12 12:43:57.286788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:62520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.330 [2024-07-12 12:43:57.286797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.330 [2024-07-12 12:43:57.286808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:62528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.331 [2024-07-12 12:43:57.286818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.331 [2024-07-12 12:43:57.286829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:62536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.331 [2024-07-12 12:43:57.286838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.331 [2024-07-12 12:43:57.286849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.331 [2024-07-12 12:43:57.286858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.331 [2024-07-12 12:43:57.286869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:62552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.331 [2024-07-12 12:43:57.286879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.331 [2024-07-12 12:43:57.286890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:62560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.331 [2024-07-12 12:43:57.286899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.331 [2024-07-12 12:43:57.286909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:62568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.331 [2024-07-12 12:43:57.286918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.331 [2024-07-12 12:43:57.286930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:62576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.331 [2024-07-12 12:43:57.286939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.331 [2024-07-12 12:43:57.286950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:62584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.331 [2024-07-12 12:43:57.286959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.331 [2024-07-12 12:43:57.286969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.331 [2024-07-12 12:43:57.286978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.331 [2024-07-12 12:43:57.286990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:62600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.331 [2024-07-12 12:43:57.287005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:19:31.331 [2024-07-12 12:43:57.287017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:61608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.331 [2024-07-12 12:43:57.287027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.331 [2024-07-12 12:43:57.287038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:61616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.331 [2024-07-12 12:43:57.287048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.331 [2024-07-12 12:43:57.287065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:61624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.331 [2024-07-12 12:43:57.287075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.331 [2024-07-12 12:43:57.287087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:61632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.331 [2024-07-12 12:43:57.287097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.331 [2024-07-12 12:43:57.287109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.331 [2024-07-12 12:43:57.287118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.331 [2024-07-12 12:43:57.287130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:61648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.331 [2024-07-12 12:43:57.287139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.331 [2024-07-12 12:43:57.287150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:61656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.331 [2024-07-12 12:43:57.287160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.331 [2024-07-12 12:43:57.287171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:61664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.331 [2024-07-12 12:43:57.287180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.331 [2024-07-12 12:43:57.287191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:61672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.331 [2024-07-12 12:43:57.287200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.331 [2024-07-12 12:43:57.287211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:61680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.331 [2024-07-12 12:43:57.287220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.331 [2024-07-12 
12:43:57.287232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:61688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.331 [2024-07-12 12:43:57.287241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.331 [2024-07-12 12:43:57.287252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:61696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.331 [2024-07-12 12:43:57.287262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.331 [2024-07-12 12:43:57.287274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:61704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.331 [2024-07-12 12:43:57.287284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.331 [2024-07-12 12:43:57.287295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:61712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.331 [2024-07-12 12:43:57.287303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.331 [2024-07-12 12:43:57.287314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:61720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:31.331 [2024-07-12 12:43:57.287323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.331 [2024-07-12 12:43:57.287335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:62608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:31.331 [2024-07-12 12:43:57.287350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.331 [2024-07-12 12:43:57.287361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x53f4d0 is same with the state(5) to be set 00:19:31.331 [2024-07-12 12:43:57.287375] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:31.331 [2024-07-12 12:43:57.287383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:31.331 [2024-07-12 12:43:57.287391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62616 len:8 PRP1 0x0 PRP2 0x0 00:19:31.331 [2024-07-12 12:43:57.287867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.331 [2024-07-12 12:43:57.287980] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x53f4d0 was disconnected and freed. reset controller. 
00:19:31.331 [2024-07-12 12:43:57.288387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:31.331 [2024-07-12 12:43:57.288479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.331 [2024-07-12 12:43:57.288638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:31.331 [2024-07-12 12:43:57.288751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.331 [2024-07-12 12:43:57.288817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:31.331 [2024-07-12 12:43:57.288934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.331 [2024-07-12 12:43:57.289026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:31.331 [2024-07-12 12:43:57.289118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:31.331 [2024-07-12 12:43:57.289207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4f4d40 is same with the state(5) to be set 00:19:31.331 [2024-07-12 12:43:57.289555] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:31.331 [2024-07-12 12:43:57.289711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4f4d40 (9): Bad file descriptor 00:19:31.331 [2024-07-12 12:43:57.289941] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:31.331 [2024-07-12 12:43:57.290098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4f4d40 with addr=10.0.0.2, port=4420 00:19:31.331 [2024-07-12 12:43:57.290162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4f4d40 is same with the state(5) to be set 00:19:31.331 [2024-07-12 12:43:57.290299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4f4d40 (9): Bad file descriptor 00:19:31.331 [2024-07-12 12:43:57.290365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:31.331 [2024-07-12 12:43:57.290468] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:31.331 [2024-07-12 12:43:57.290582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:31.331 [2024-07-12 12:43:57.290634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:31.331 [2024-07-12 12:43:57.290709] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:31.331 12:43:57 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:19:33.229 [2024-07-12 12:43:59.290952] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:33.229 [2024-07-12 12:43:59.291251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4f4d40 with addr=10.0.0.2, port=4420 00:19:33.229 [2024-07-12 12:43:59.291278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4f4d40 is same with the state(5) to be set 00:19:33.229 [2024-07-12 12:43:59.291316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4f4d40 (9): Bad file descriptor 00:19:33.229 [2024-07-12 12:43:59.291337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:33.229 [2024-07-12 12:43:59.291352] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:33.229 [2024-07-12 12:43:59.291364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:33.229 [2024-07-12 12:43:59.291392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:33.229 [2024-07-12 12:43:59.291422] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:33.486 12:43:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:19:33.486 12:43:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:33.486 12:43:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:19:33.744 12:43:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:19:33.744 12:43:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:19:33.744 12:43:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:19:33.744 12:43:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:19:34.001 12:43:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:19:34.001 12:43:59 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:19:35.372 [2024-07-12 12:44:01.291584] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:35.372 [2024-07-12 12:44:01.291648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4f4d40 with addr=10.0.0.2, port=4420 00:19:35.372 [2024-07-12 12:44:01.291665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4f4d40 is same with the state(5) to be set 00:19:35.372 [2024-07-12 12:44:01.291693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4f4d40 (9): Bad file descriptor 00:19:35.372 [2024-07-12 12:44:01.291713] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:35.372 [2024-07-12 12:44:01.291724] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:35.372 [2024-07-12 12:44:01.291735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:35.372 [2024-07-12 12:44:01.291764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. [2024-07-12 12:44:01.291776] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:37.269 [2024-07-12 12:44:03.291818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:37.269 [2024-07-12 12:44:03.291908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state [2024-07-12 12:44:03.291921] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed [2024-07-12 12:44:03.291933] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state [2024-07-12 12:44:03.291962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:38.638
00:19:38.638 Latency(us)
00:19:38.638 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:38.638 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:38.638 Verification LBA range: start 0x0 length 0x4000
00:19:38.638 NVMe0n1 : 8.11 949.84 3.71 15.79 0.00 132347.02 3991.74 7015926.69
00:19:38.638 ===================================================================================================================
00:19:38.638 Total : 949.84 3.71 15.79 0.00 132347.02 3991.74 7015926.69
00:19:38.638 0
00:19:38.895 12:44:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
00:19:38.895 12:44:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:19:38.895 12:44:04 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:19:39.250 12:44:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:19:39.250 12:44:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev
00:19:39.250 12:44:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:19:39.250 12:44:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:19:39.508 12:44:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:19:39.508 12:44:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 82442
00:19:39.508 12:44:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 82418
00:19:39.508 12:44:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82418 ']'
00:19:39.508 12:44:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82418
00:19:39.508 12:44:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname
00:19:39.508 12:44:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:39.508 12:44:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82418
00:19:39.508 killing process with pid 82418 Received shutdown signal, test time was about 9.309216 seconds
00:19:39.508
00:19:39.508 Latency(us)
00:19:39.508 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:39.508 ===================================================================================================================
00:19:39.508 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:39.508 12:44:05 nvmf_tcp.nvmf_timeout -- 
common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:39.508 12:44:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:39.508 12:44:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82418' 00:19:39.508 12:44:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82418 00:19:39.508 12:44:05 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82418 00:19:39.767 12:44:05 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:40.025 [2024-07-12 12:44:06.010343] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:40.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:40.025 12:44:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=82564 00:19:40.025 12:44:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:40.025 12:44:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 82564 /var/tmp/bdevperf.sock 00:19:40.025 12:44:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82564 ']' 00:19:40.025 12:44:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:40.025 12:44:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:40.025 12:44:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:40.025 12:44:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:40.025 12:44:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:40.025 [2024-07-12 12:44:06.083962] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:19:40.025 [2024-07-12 12:44:06.084067] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82564 ] 00:19:40.283 [2024-07-12 12:44:06.226560] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.283 [2024-07-12 12:44:06.329878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:40.540 [2024-07-12 12:44:06.382295] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:19:41.104 12:44:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:41.104 12:44:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0 00:19:41.104 12:44:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:41.361 12:44:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:19:41.664 NVMe0n1 00:19:41.664 12:44:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=82587 00:19:41.664 12:44:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:41.664 12:44:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:19:41.924 Running I/O for 10 seconds... 00:19:42.858 12:44:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:43.124 [2024-07-12 12:44:08.974498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:66584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:43.124 [2024-07-12 12:44:08.974551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.124 [2024-07-12 12:44:08.974576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:66592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:43.124 [2024-07-12 12:44:08.974588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.124 [2024-07-12 12:44:08.974601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:66600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:43.124 [2024-07-12 12:44:08.974611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.124 [2024-07-12 12:44:08.974624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:66608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:43.124 [2024-07-12 12:44:08.974634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.124 [2024-07-12 12:44:08.974647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:43.124 [2024-07-12 12:44:08.974657] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.124 [2024-07-12 12:44:08.974669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:66624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:43.124 [2024-07-12 12:44:08.974679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.124 [2024-07-12 12:44:08.974691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:66632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:43.124 [2024-07-12 12:44:08.974702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.124 [2024-07-12 12:44:08.974713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:66640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:43.124 [2024-07-12 12:44:08.974724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.124 [2024-07-12 12:44:08.974736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:66648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:43.124 [2024-07-12 12:44:08.974746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.124 [2024-07-12 12:44:08.974758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:66656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:43.124 [2024-07-12 12:44:08.974768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.124 [2024-07-12 12:44:08.974780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:66664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:43.124 [2024-07-12 12:44:08.974790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.124 [2024-07-12 12:44:08.974802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:66672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:43.124 [2024-07-12 12:44:08.974813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.124 [2024-07-12 12:44:08.974825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:66680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:43.124 [2024-07-12 12:44:08.974835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.124 [2024-07-12 12:44:08.974855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:66688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:43.124 [2024-07-12 12:44:08.974865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.124 [2024-07-12 12:44:08.974877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:66696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:43.124 [2024-07-12 12:44:08.974888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.124 [2024-07-12 12:44:08.974900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:65704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.124 [2024-07-12 12:44:08.974910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.124 [2024-07-12 12:44:08.974922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:65712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.124 [2024-07-12 12:44:08.974932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.124 [2024-07-12 12:44:08.974948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:65720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.124 [2024-07-12 12:44:08.974958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.124 [2024-07-12 12:44:08.974970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:65728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.124 [2024-07-12 12:44:08.974981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.124 [2024-07-12 12:44:08.974993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:65736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.124 [2024-07-12 12:44:08.975003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.124 [2024-07-12 12:44:08.975015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:65744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.124 [2024-07-12 12:44:08.975025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.124 [2024-07-12 12:44:08.975037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:65752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.124 [2024-07-12 12:44:08.975047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.124 [2024-07-12 12:44:08.975059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:65760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.124 [2024-07-12 12:44:08.975070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.124 [2024-07-12 12:44:08.975081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:65768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.124 [2024-07-12 12:44:08.975091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.124 [2024-07-12 12:44:08.975104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:65776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.124 [2024-07-12 12:44:08.975114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.124 [2024-07-12 12:44:08.975127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:65784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.124 [2024-07-12 12:44:08.975138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.125 [2024-07-12 12:44:08.975150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:65792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.125 [2024-07-12 12:44:08.975160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.125 [2024-07-12 12:44:08.975172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.125 [2024-07-12 12:44:08.975182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.125 [2024-07-12 12:44:08.975194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:65808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.125 [2024-07-12 12:44:08.975204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.125 [2024-07-12 12:44:08.975216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:65816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.125 [2024-07-12 12:44:08.975226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.125 [2024-07-12 12:44:08.975238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:66704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:43.125 [2024-07-12 12:44:08.975249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.125 [2024-07-12 12:44:08.975260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:66712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:43.125 [2024-07-12 12:44:08.975271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.125 [2024-07-12 12:44:08.975282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:65824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.125 [2024-07-12 12:44:08.975293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.125 [2024-07-12 12:44:08.975305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:65832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.125 [2024-07-12 12:44:08.975316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.125 [2024-07-12 12:44:08.975328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:65840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.125 [2024-07-12 12:44:08.975338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:43.125 [2024-07-12 12:44:08.975350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:65848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.125 [2024-07-12 12:44:08.975360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.125 [2024-07-12 12:44:08.975372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:65856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.125 [2024-07-12 12:44:08.975382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.125 [2024-07-12 12:44:08.975394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:65864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.125 [2024-07-12 12:44:08.975419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.125 [2024-07-12 12:44:08.975436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:65872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.125 [2024-07-12 12:44:08.975446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.125 [2024-07-12 12:44:08.975458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:66720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:43.125 [2024-07-12 12:44:08.975469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.125 [2024-07-12 12:44:08.975481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:65880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.125 [2024-07-12 12:44:08.975492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.125 [2024-07-12 12:44:08.975504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:65888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.125 [2024-07-12 12:44:08.975526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.125 [2024-07-12 12:44:08.975539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:65896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.125 [2024-07-12 12:44:08.975550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.125 [2024-07-12 12:44:08.975561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:65904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.125 [2024-07-12 12:44:08.975572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.125 [2024-07-12 12:44:08.975584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:65912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.125 [2024-07-12 12:44:08.975594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.125 [2024-07-12 12:44:08.975606] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:65920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.125 [2024-07-12 12:44:08.975617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.125 [2024-07-12 12:44:08.975629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:65928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.125 [2024-07-12 12:44:08.975639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.125 [2024-07-12 12:44:08.975651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:65936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.125 [2024-07-12 12:44:08.975661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.125 [2024-07-12 12:44:08.975673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:65944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.125 [2024-07-12 12:44:08.975684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.125 [2024-07-12 12:44:08.975696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:65952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.125 [2024-07-12 12:44:08.975707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.125 [2024-07-12 12:44:08.975719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:65960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.125 [2024-07-12 12:44:08.975730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.125 [2024-07-12 12:44:08.975742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:65968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.125 [2024-07-12 12:44:08.975752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.125 [2024-07-12 12:44:08.975764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:65976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.125 [2024-07-12 12:44:08.975775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.125 [2024-07-12 12:44:08.975787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:65984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.125 [2024-07-12 12:44:08.975797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.125 [2024-07-12 12:44:08.975809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:65992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.125 [2024-07-12 12:44:08.975819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.125 [2024-07-12 12:44:08.975831] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:66000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.125 [2024-07-12 12:44:08.975842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.125 [2024-07-12 12:44:08.975854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:66008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.125 [2024-07-12 12:44:08.975865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.125 [2024-07-12 12:44:08.975877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:66016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.125 [2024-07-12 12:44:08.975887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.125 [2024-07-12 12:44:08.975899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:66024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.125 [2024-07-12 12:44:08.975909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.125 [2024-07-12 12:44:08.975921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:66032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.125 [2024-07-12 12:44:08.975932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.125 [2024-07-12 12:44:08.975943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:66040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.125 [2024-07-12 12:44:08.975954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.125 [2024-07-12 12:44:08.975966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:66048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.125 [2024-07-12 12:44:08.975976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.125 [2024-07-12 12:44:08.975988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.125 [2024-07-12 12:44:08.975999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.125 [2024-07-12 12:44:08.976012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:66064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.125 [2024-07-12 12:44:08.976022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.125 [2024-07-12 12:44:08.976034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:66072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.125 [2024-07-12 12:44:08.976044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.125 [2024-07-12 12:44:08.976056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:63 nsid:1 lba:66080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.125 [2024-07-12 12:44:08.976067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.125 [2024-07-12 12:44:08.976079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:66088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.125 [2024-07-12 12:44:08.976089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.126 [2024-07-12 12:44:08.976101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.126 [2024-07-12 12:44:08.976111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.126 [2024-07-12 12:44:08.976123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:66104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.126 [2024-07-12 12:44:08.976133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.126 [2024-07-12 12:44:08.976145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:66112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.126 [2024-07-12 12:44:08.976155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.126 [2024-07-12 12:44:08.976167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:66120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.126 [2024-07-12 12:44:08.976177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.126 [2024-07-12 12:44:08.976189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:66128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.126 [2024-07-12 12:44:08.976200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.126 [2024-07-12 12:44:08.976213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:66136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.126 [2024-07-12 12:44:08.976223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.126 [2024-07-12 12:44:08.976235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:66144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.126 [2024-07-12 12:44:08.976245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.126 [2024-07-12 12:44:08.976257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:66152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.126 [2024-07-12 12:44:08.976267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.126 [2024-07-12 12:44:08.976279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:66160 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.126 [2024-07-12 12:44:08.976290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.126 [2024-07-12 12:44:08.976302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:66168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.126 [2024-07-12 12:44:08.976312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.126 [2024-07-12 12:44:08.976324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.126 [2024-07-12 12:44:08.976334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.126 [2024-07-12 12:44:08.976346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:66184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.126 [2024-07-12 12:44:08.976357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.126 [2024-07-12 12:44:08.976369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:66192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.126 [2024-07-12 12:44:08.976379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.126 [2024-07-12 12:44:08.976391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:66200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.126 [2024-07-12 12:44:08.976411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.126 [2024-07-12 12:44:08.976425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.126 [2024-07-12 12:44:08.976436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.126 [2024-07-12 12:44:08.976448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.126 [2024-07-12 12:44:08.976459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.126 [2024-07-12 12:44:08.976471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:66224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.126 [2024-07-12 12:44:08.976481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.126 [2024-07-12 12:44:08.976492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:66232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.126 [2024-07-12 12:44:08.976503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.126 [2024-07-12 12:44:08.976515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:66240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:43.126 [2024-07-12 12:44:08.976525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.126 [2024-07-12 12:44:08.976537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:66248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.126 [2024-07-12 12:44:08.976547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.126 [2024-07-12 12:44:08.976559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:66256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.126 [2024-07-12 12:44:08.976570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.126 [2024-07-12 12:44:08.976583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:66264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.126 [2024-07-12 12:44:08.976594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.126 [2024-07-12 12:44:08.976605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:66272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.126 [2024-07-12 12:44:08.976616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.126 [2024-07-12 12:44:08.976628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:66280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.126 [2024-07-12 12:44:08.976638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.126 [2024-07-12 12:44:08.976650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:66288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.126 [2024-07-12 12:44:08.976660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.126 [2024-07-12 12:44:08.976672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:66296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.126 [2024-07-12 12:44:08.976682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.126 [2024-07-12 12:44:08.976694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:66304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.126 [2024-07-12 12:44:08.976705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.126 [2024-07-12 12:44:08.976717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:66312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.126 [2024-07-12 12:44:08.976736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.126 [2024-07-12 12:44:08.976749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:66320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.126 [2024-07-12 
12:44:08.976759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.126 [2024-07-12 12:44:08.976771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:66328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.126 [2024-07-12 12:44:08.976781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.126 [2024-07-12 12:44:08.976793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:66336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.126 [2024-07-12 12:44:08.976803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.126 [2024-07-12 12:44:08.976815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:66344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.126 [2024-07-12 12:44:08.976825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.126 [2024-07-12 12:44:08.976838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:66352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.126 [2024-07-12 12:44:08.976848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.126 [2024-07-12 12:44:08.976860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:66360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.126 [2024-07-12 12:44:08.976871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.126 [2024-07-12 12:44:08.976883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:66368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.126 [2024-07-12 12:44:08.976893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.126 [2024-07-12 12:44:08.976905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:66376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.126 [2024-07-12 12:44:08.976915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.126 [2024-07-12 12:44:08.976927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:66384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.126 [2024-07-12 12:44:08.976938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.126 [2024-07-12 12:44:08.976950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:66392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.126 [2024-07-12 12:44:08.976961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.126 [2024-07-12 12:44:08.976973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:66400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.126 [2024-07-12 12:44:08.976983] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.126 [2024-07-12 12:44:08.976995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:66408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.126 [2024-07-12 12:44:08.977005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.126 [2024-07-12 12:44:08.977017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:66416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.126 [2024-07-12 12:44:08.977027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.126 [2024-07-12 12:44:08.977039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:66424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.126 [2024-07-12 12:44:08.977049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.126 [2024-07-12 12:44:08.977061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:66432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.127 [2024-07-12 12:44:08.977071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.127 [2024-07-12 12:44:08.977083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:66440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.127 [2024-07-12 12:44:08.977100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.127 [2024-07-12 12:44:08.977112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:66448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.127 [2024-07-12 12:44:08.977122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.127 [2024-07-12 12:44:08.977134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:66456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.127 [2024-07-12 12:44:08.977145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.127 [2024-07-12 12:44:08.977158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:66464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.127 [2024-07-12 12:44:08.977168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.127 [2024-07-12 12:44:08.977180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:66472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.127 [2024-07-12 12:44:08.977190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.127 [2024-07-12 12:44:08.977202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:66480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.127 [2024-07-12 12:44:08.977212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.127 [2024-07-12 12:44:08.977224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:66488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.127 [2024-07-12 12:44:08.977234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.127 [2024-07-12 12:44:08.977246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:66496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.127 [2024-07-12 12:44:08.977257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.127 [2024-07-12 12:44:08.977268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:66504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.127 [2024-07-12 12:44:08.977279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.127 [2024-07-12 12:44:08.977290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:66512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.127 [2024-07-12 12:44:08.977301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.127 [2024-07-12 12:44:08.977314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:66520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.127 [2024-07-12 12:44:08.977324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.127 [2024-07-12 12:44:08.977336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:66528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.127 [2024-07-12 12:44:08.977346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.127 [2024-07-12 12:44:08.977358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:66536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.127 [2024-07-12 12:44:08.977368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.127 [2024-07-12 12:44:08.977380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:66544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.127 [2024-07-12 12:44:08.977390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.127 [2024-07-12 12:44:08.977412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:66552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.127 [2024-07-12 12:44:08.977425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.127 [2024-07-12 12:44:08.977437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:66560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.127 [2024-07-12 12:44:08.977448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.127 [2024-07-12 12:44:08.977460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:66568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:43.127 [2024-07-12 12:44:08.977477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.127 [2024-07-12 12:44:08.977490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bbb4d0 is same with the state(5) to be set 00:19:43.127 [2024-07-12 12:44:08.977503] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:43.127 [2024-07-12 12:44:08.977511] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:43.127 [2024-07-12 12:44:08.977520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66576 len:8 PRP1 0x0 PRP2 0x0 00:19:43.127 [2024-07-12 12:44:08.977530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.127 [2024-07-12 12:44:08.977584] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1bbb4d0 was disconnected and freed. reset controller. 00:19:43.127 [2024-07-12 12:44:08.977846] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:43.127 [2024-07-12 12:44:08.977923] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b70d40 (9): Bad file descriptor 00:19:43.127 [2024-07-12 12:44:08.978035] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:43.127 [2024-07-12 12:44:08.978056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70d40 with addr=10.0.0.2, port=4420 00:19:43.127 [2024-07-12 12:44:08.978072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70d40 is same with the state(5) to be set 00:19:43.127 [2024-07-12 12:44:08.978090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b70d40 (9): Bad file descriptor 00:19:43.127 [2024-07-12 12:44:08.978106] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:43.127 [2024-07-12 12:44:08.978116] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:43.127 [2024-07-12 12:44:08.978127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:43.127 [2024-07-12 12:44:08.978147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:43.127 [2024-07-12 12:44:08.978159] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:43.127 12:44:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:19:44.092 [2024-07-12 12:44:09.978313] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:44.092 [2024-07-12 12:44:09.978391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70d40 with addr=10.0.0.2, port=4420 00:19:44.092 [2024-07-12 12:44:09.978421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70d40 is same with the state(5) to be set 00:19:44.092 [2024-07-12 12:44:09.978452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b70d40 (9): Bad file descriptor 00:19:44.092 [2024-07-12 12:44:09.978473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:44.092 [2024-07-12 12:44:09.978483] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:44.092 [2024-07-12 12:44:09.978495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:44.092 [2024-07-12 12:44:09.978524] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:44.092 [2024-07-12 12:44:09.978536] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:44.092 12:44:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:44.350 [2024-07-12 12:44:10.270013] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:44.350 12:44:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 82587 00:19:45.282 [2024-07-12 12:44:10.994333] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:51.915 00:19:51.915 Latency(us) 00:19:51.915 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.915 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:51.915 Verification LBA range: start 0x0 length 0x4000 00:19:51.915 NVMe0n1 : 10.01 5172.29 20.20 0.00 0.00 24693.56 1951.19 3019898.88 00:19:51.915 =================================================================================================================== 00:19:51.915 Total : 5172.29 20.20 0.00 0.00 24693.56 1951.19 3019898.88 00:19:51.915 0 00:19:51.915 12:44:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=82692 00:19:51.915 12:44:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:51.915 12:44:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:19:51.915 Running I/O for 10 seconds... 
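(Note on reproducing this phase outside the CI run: the reconnect behaviour above is driven entirely by the RPCs visible in the trace. The sketch below is not host/timeout.sh itself, only a hand-replayable approximation; the socket paths, the 10.0.0.2:4420 address, the NQN and the NVMe0 name are copied from the trace, and anything beyond that is an assumption.)

    #!/usr/bin/env bash
    # Sketch of the traced RPC sequence, assuming a bdevperf instance already
    # listening on /var/tmp/bdevperf.sock and a running nvmf target on the
    # default RPC socket.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    BPERF_SOCK=/var/tmp/bdevperf.sock
    NQN=nqn.2016-06.io.spdk:cnode1

    # Attach the controller as traced: 1 s reconnect delay, 2 s fast-io-fail,
    # 5 s controller-loss timeout. "-r -1" is replayed verbatim from the trace
    # (see "rpc.py bdev_nvme_set_options --help" for the flag's meaning).
    $RPC -s "$BPERF_SOCK" bdev_nvme_set_options -r -1
    $RPC -s "$BPERF_SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN" \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 \
        --reconnect-delay-sec 1

    # Sanity checks used earlier in the trace: controller and namespace names.
    $RPC -s "$BPERF_SOCK" bdev_nvme_get_controllers | jq -r '.[].name'   # NVMe0
    $RPC -s "$BPERF_SOCK" bdev_get_bdevs | jq -r '.[].name'              # NVMe0n1

    # Flap the listener on the target. Removing it aborts queued I/O (the long
    # "ABORTED - SQ DELETION" dumps above) and starts the reconnect loop;
    # re-adding it well inside the 5 s controller-loss window lets the next
    # reset succeed ("Resetting controller successful" in the log).
    $RPC nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420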
00:19:52.871 12:44:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:53.131 [2024-07-12 12:44:19.096049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:69752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.131 [2024-07-12 12:44:19.096143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.131 [2024-07-12 12:44:19.096172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:69760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.131 [2024-07-12 12:44:19.096188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.131 [2024-07-12 12:44:19.096206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:69768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.131 [2024-07-12 12:44:19.096220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.131 [2024-07-12 12:44:19.096236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:69776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.131 [2024-07-12 12:44:19.096249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.131 [2024-07-12 12:44:19.096265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:69784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.131 [2024-07-12 12:44:19.096279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.131 [2024-07-12 12:44:19.096294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:69792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.131 [2024-07-12 12:44:19.096308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.131 [2024-07-12 12:44:19.096324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:69800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.131 [2024-07-12 12:44:19.096337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.131 [2024-07-12 12:44:19.096353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:69808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.131 [2024-07-12 12:44:19.096367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.131 [2024-07-12 12:44:19.096383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:69240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.131 [2024-07-12 12:44:19.096397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.131 [2024-07-12 12:44:19.096432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:69248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.131 
[2024-07-12 12:44:19.096446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.131 [2024-07-12 12:44:19.096462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:69256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.131 [2024-07-12 12:44:19.096476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.131 [2024-07-12 12:44:19.096491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.131 [2024-07-12 12:44:19.096513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.131 [2024-07-12 12:44:19.096529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:69272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.131 [2024-07-12 12:44:19.096543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.131 [2024-07-12 12:44:19.096568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:69280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.131 [2024-07-12 12:44:19.096581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.131 [2024-07-12 12:44:19.096597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:69288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.131 [2024-07-12 12:44:19.096610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.131 [2024-07-12 12:44:19.096626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:69296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.131 [2024-07-12 12:44:19.096640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.131 [2024-07-12 12:44:19.096658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:69816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.131 [2024-07-12 12:44:19.096672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.131 [2024-07-12 12:44:19.096689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:69824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.131 [2024-07-12 12:44:19.096703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.131 [2024-07-12 12:44:19.096719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:69832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.131 [2024-07-12 12:44:19.096733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.131 [2024-07-12 12:44:19.096749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:69840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.131 [2024-07-12 12:44:19.096763] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.131 [2024-07-12 12:44:19.096778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:69848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.131 [2024-07-12 12:44:19.096792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.131 [2024-07-12 12:44:19.096807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:69856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.131 [2024-07-12 12:44:19.096821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.131 [2024-07-12 12:44:19.096836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:69864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.131 [2024-07-12 12:44:19.096850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.131 [2024-07-12 12:44:19.096865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:69872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.131 [2024-07-12 12:44:19.096879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.131 [2024-07-12 12:44:19.096895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:69880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.131 [2024-07-12 12:44:19.096908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.131 [2024-07-12 12:44:19.096923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:69888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.131 [2024-07-12 12:44:19.096938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.131 [2024-07-12 12:44:19.096954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:69896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.131 [2024-07-12 12:44:19.096968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.131 [2024-07-12 12:44:19.096983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:69904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.131 [2024-07-12 12:44:19.096996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.131 [2024-07-12 12:44:19.097012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:69912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.131 [2024-07-12 12:44:19.097026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.131 [2024-07-12 12:44:19.097041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:69920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.131 [2024-07-12 12:44:19.097054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.131 [2024-07-12 12:44:19.097070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:69928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.132 [2024-07-12 12:44:19.097083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.132 [2024-07-12 12:44:19.097099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:69936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.132 [2024-07-12 12:44:19.097114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.132 [2024-07-12 12:44:19.097130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.132 [2024-07-12 12:44:19.097144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.132 [2024-07-12 12:44:19.097162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:69312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.132 [2024-07-12 12:44:19.097179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.132 [2024-07-12 12:44:19.097199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:69320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.132 [2024-07-12 12:44:19.097216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.132 [2024-07-12 12:44:19.097236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:69328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.132 [2024-07-12 12:44:19.097252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.132 [2024-07-12 12:44:19.097268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:69336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.132 [2024-07-12 12:44:19.097281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.132 [2024-07-12 12:44:19.097298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:69344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.132 [2024-07-12 12:44:19.097311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.132 [2024-07-12 12:44:19.097328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:69352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.132 [2024-07-12 12:44:19.097342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.132 [2024-07-12 12:44:19.097357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:69360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.132 [2024-07-12 12:44:19.097372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:53.132 [2024-07-12 12:44:19.097387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:69368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.132 [2024-07-12 12:44:19.097413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.132 [2024-07-12 12:44:19.097431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:69376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.132 [2024-07-12 12:44:19.097445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.132 [2024-07-12 12:44:19.097461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:69384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.132 [2024-07-12 12:44:19.097474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.132 [2024-07-12 12:44:19.097490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:69392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.132 [2024-07-12 12:44:19.097503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.132 [2024-07-12 12:44:19.097519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:69400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.132 [2024-07-12 12:44:19.097532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.132 [2024-07-12 12:44:19.097548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:69408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.132 [2024-07-12 12:44:19.097561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.132 [2024-07-12 12:44:19.097577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:69416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.132 [2024-07-12 12:44:19.097590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.132 [2024-07-12 12:44:19.097606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:69424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.132 [2024-07-12 12:44:19.097621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.132 [2024-07-12 12:44:19.097637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:69944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.132 [2024-07-12 12:44:19.097651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.132 [2024-07-12 12:44:19.097667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:69952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.132 [2024-07-12 12:44:19.097681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.132 
[2024-07-12 12:44:19.097697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:69960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.132 [2024-07-12 12:44:19.097712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.132 [2024-07-12 12:44:19.097728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:69968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.132 [2024-07-12 12:44:19.097742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.132 [2024-07-12 12:44:19.097757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:69976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.132 [2024-07-12 12:44:19.097770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.132 [2024-07-12 12:44:19.097785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:69984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.132 [2024-07-12 12:44:19.097799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.132 [2024-07-12 12:44:19.097814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:69992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.132 [2024-07-12 12:44:19.097828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.132 [2024-07-12 12:44:19.097845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:70000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.132 [2024-07-12 12:44:19.097859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.132 [2024-07-12 12:44:19.097875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:70008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.132 [2024-07-12 12:44:19.097888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.132 [2024-07-12 12:44:19.097904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:70016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.132 [2024-07-12 12:44:19.097917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.132 [2024-07-12 12:44:19.097933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:70024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.132 [2024-07-12 12:44:19.097947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.132 [2024-07-12 12:44:19.097962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:70032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.132 [2024-07-12 12:44:19.097976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.132 [2024-07-12 12:44:19.097992] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:70040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.132 [2024-07-12 12:44:19.098005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.132 [2024-07-12 12:44:19.098021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:70048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.132 [2024-07-12 12:44:19.098034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.132 [2024-07-12 12:44:19.098050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:70056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.132 [2024-07-12 12:44:19.098063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.132 [2024-07-12 12:44:19.098080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:70064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.132 [2024-07-12 12:44:19.098093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.132 [2024-07-12 12:44:19.098109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:69432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.132 [2024-07-12 12:44:19.098122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.132 [2024-07-12 12:44:19.098138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:69440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.132 [2024-07-12 12:44:19.098152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.132 [2024-07-12 12:44:19.098168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:69448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.132 [2024-07-12 12:44:19.098182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.132 [2024-07-12 12:44:19.098197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:69456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.132 [2024-07-12 12:44:19.098210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.132 [2024-07-12 12:44:19.098226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:69464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.132 [2024-07-12 12:44:19.098239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.132 [2024-07-12 12:44:19.098255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:69472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.132 [2024-07-12 12:44:19.098268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.132 [2024-07-12 12:44:19.098284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:30 nsid:1 lba:69480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.132 [2024-07-12 12:44:19.098299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.132 [2024-07-12 12:44:19.098315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:69488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.132 [2024-07-12 12:44:19.098328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.132 [2024-07-12 12:44:19.098344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:70072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.132 [2024-07-12 12:44:19.098358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.132 [2024-07-12 12:44:19.098374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:70080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.133 [2024-07-12 12:44:19.098387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.133 [2024-07-12 12:44:19.098416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:70088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.133 [2024-07-12 12:44:19.098431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.133 [2024-07-12 12:44:19.098447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:70096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.133 [2024-07-12 12:44:19.098463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.133 [2024-07-12 12:44:19.098479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:70104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.133 [2024-07-12 12:44:19.098492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.133 [2024-07-12 12:44:19.098508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:70112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.133 [2024-07-12 12:44:19.098521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.133 [2024-07-12 12:44:19.098537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:70120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.133 [2024-07-12 12:44:19.098550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.133 [2024-07-12 12:44:19.098566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:70128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.133 [2024-07-12 12:44:19.098579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.133 [2024-07-12 12:44:19.098595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:70136 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:19:53.133 [2024-07-12 12:44:19.098609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.133 [2024-07-12 12:44:19.098626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:70144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.133 [2024-07-12 12:44:19.098640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.133 [2024-07-12 12:44:19.098655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:70152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.133 [2024-07-12 12:44:19.098669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.133 [2024-07-12 12:44:19.098684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:70160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.133 [2024-07-12 12:44:19.098698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.133 [2024-07-12 12:44:19.098714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:70168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.133 [2024-07-12 12:44:19.098728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.133 [2024-07-12 12:44:19.098743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:70176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.133 [2024-07-12 12:44:19.098757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.133 [2024-07-12 12:44:19.098773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:70184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.133 [2024-07-12 12:44:19.098801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.133 [2024-07-12 12:44:19.098817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:70192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.133 [2024-07-12 12:44:19.098832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.133 [2024-07-12 12:44:19.098848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:69496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.133 [2024-07-12 12:44:19.098862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.133 [2024-07-12 12:44:19.098878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:69504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.133 [2024-07-12 12:44:19.098892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.133 [2024-07-12 12:44:19.098907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:69512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.133 
[2024-07-12 12:44:19.098921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.133 [2024-07-12 12:44:19.098936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:69520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.133 [2024-07-12 12:44:19.098950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.133 [2024-07-12 12:44:19.098965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:69528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.133 [2024-07-12 12:44:19.098979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.133 [2024-07-12 12:44:19.098994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:69536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.133 [2024-07-12 12:44:19.099008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.133 [2024-07-12 12:44:19.099023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:69544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.133 [2024-07-12 12:44:19.099037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.133 [2024-07-12 12:44:19.099052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:69552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.133 [2024-07-12 12:44:19.099066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.133 [2024-07-12 12:44:19.099082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:69560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.133 [2024-07-12 12:44:19.099095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.133 [2024-07-12 12:44:19.099111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:69568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.133 [2024-07-12 12:44:19.099125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.133 [2024-07-12 12:44:19.099141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:69576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.133 [2024-07-12 12:44:19.099154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.133 [2024-07-12 12:44:19.099170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:69584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.133 [2024-07-12 12:44:19.099183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.133 [2024-07-12 12:44:19.099199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:69592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.133 [2024-07-12 12:44:19.099214] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.133 [2024-07-12 12:44:19.099229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:69600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.133 [2024-07-12 12:44:19.099243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.133 [2024-07-12 12:44:19.099259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:69608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.133 [2024-07-12 12:44:19.099279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.133 [2024-07-12 12:44:19.099295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:69616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.133 [2024-07-12 12:44:19.099308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.133 [2024-07-12 12:44:19.099324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:69624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.133 [2024-07-12 12:44:19.099337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.133 [2024-07-12 12:44:19.099353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:69632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.133 [2024-07-12 12:44:19.099367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.133 [2024-07-12 12:44:19.099382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:69640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.133 [2024-07-12 12:44:19.099396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.133 [2024-07-12 12:44:19.099424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:69648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.133 [2024-07-12 12:44:19.099438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.133 [2024-07-12 12:44:19.099454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:69656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.133 [2024-07-12 12:44:19.099467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.133 [2024-07-12 12:44:19.099482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:69664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.133 [2024-07-12 12:44:19.099495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.133 [2024-07-12 12:44:19.099511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:69672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.133 [2024-07-12 12:44:19.099547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.133 [2024-07-12 12:44:19.099565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:69680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.133 [2024-07-12 12:44:19.099579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.133 [2024-07-12 12:44:19.099595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:70200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.133 [2024-07-12 12:44:19.099609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.133 [2024-07-12 12:44:19.099624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:70208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.133 [2024-07-12 12:44:19.099637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.133 [2024-07-12 12:44:19.099652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:70216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.133 [2024-07-12 12:44:19.099665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.133 [2024-07-12 12:44:19.099680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:70224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.133 [2024-07-12 12:44:19.099693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.134 [2024-07-12 12:44:19.099708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:70232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.134 [2024-07-12 12:44:19.099721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.134 [2024-07-12 12:44:19.099735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:70240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.134 [2024-07-12 12:44:19.099748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.134 [2024-07-12 12:44:19.099763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:70248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.134 [2024-07-12 12:44:19.099783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.134 [2024-07-12 12:44:19.099797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:70256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:53.134 [2024-07-12 12:44:19.099810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.134 [2024-07-12 12:44:19.099825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:69688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.134 [2024-07-12 12:44:19.099839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:53.134 [2024-07-12 12:44:19.099854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:69696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.134 [2024-07-12 12:44:19.099867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.134 [2024-07-12 12:44:19.099882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:69704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.134 [2024-07-12 12:44:19.099898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.134 [2024-07-12 12:44:19.099918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:69712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.134 [2024-07-12 12:44:19.099934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.134 [2024-07-12 12:44:19.099953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:69720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.134 [2024-07-12 12:44:19.099969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.134 [2024-07-12 12:44:19.099987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:69728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.134 [2024-07-12 12:44:19.100003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.134 [2024-07-12 12:44:19.100023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:69736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:53.134 [2024-07-12 12:44:19.100039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.134 [2024-07-12 12:44:19.100077] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:53.134 [2024-07-12 12:44:19.100088] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:53.134 [2024-07-12 12:44:19.100097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69744 len:8 PRP1 0x0 PRP2 0x0 00:19:53.134 [2024-07-12 12:44:19.100107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:53.134 [2024-07-12 12:44:19.100160] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1bba000 was disconnected and freed. reset controller. 
00:19:53.134 [2024-07-12 12:44:19.100415] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:53.134 [2024-07-12 12:44:19.100497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b70d40 (9): Bad file descriptor 00:19:53.134 [2024-07-12 12:44:19.100623] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:53.134 [2024-07-12 12:44:19.100643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70d40 with addr=10.0.0.2, port=4420 00:19:53.134 [2024-07-12 12:44:19.100654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70d40 is same with the state(5) to be set 00:19:53.134 [2024-07-12 12:44:19.100672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b70d40 (9): Bad file descriptor 00:19:53.134 [2024-07-12 12:44:19.100688] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:53.134 [2024-07-12 12:44:19.100698] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:53.134 [2024-07-12 12:44:19.100709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:53.134 [2024-07-12 12:44:19.100728] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:53.134 [2024-07-12 12:44:19.100740] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:53.134 12:44:19 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:19:54.065 [2024-07-12 12:44:20.100980] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:54.065 [2024-07-12 12:44:20.101090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70d40 with addr=10.0.0.2, port=4420 00:19:54.065 [2024-07-12 12:44:20.101109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70d40 is same with the state(5) to be set 00:19:54.065 [2024-07-12 12:44:20.101140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b70d40 (9): Bad file descriptor 00:19:54.065 [2024-07-12 12:44:20.101160] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:54.065 [2024-07-12 12:44:20.101171] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:54.065 [2024-07-12 12:44:20.101182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:54.065 [2024-07-12 12:44:20.101212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:54.065 [2024-07-12 12:44:20.101224] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:55.435 [2024-07-12 12:44:21.101384] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:55.435 [2024-07-12 12:44:21.101474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70d40 with addr=10.0.0.2, port=4420 00:19:55.435 [2024-07-12 12:44:21.101492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70d40 is same with the state(5) to be set 00:19:55.435 [2024-07-12 12:44:21.101521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b70d40 (9): Bad file descriptor 00:19:55.435 [2024-07-12 12:44:21.101540] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:55.435 [2024-07-12 12:44:21.101552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:55.435 [2024-07-12 12:44:21.101564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:55.435 [2024-07-12 12:44:21.101592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:55.435 [2024-07-12 12:44:21.101604] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:56.375 [2024-07-12 12:44:22.105221] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:56.375 [2024-07-12 12:44:22.105309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b70d40 with addr=10.0.0.2, port=4420 00:19:56.375 [2024-07-12 12:44:22.105326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b70d40 is same with the state(5) to be set 00:19:56.375 [2024-07-12 12:44:22.105594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b70d40 (9): Bad file descriptor 00:19:56.375 [2024-07-12 12:44:22.105842] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:56.375 [2024-07-12 12:44:22.105856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:56.375 [2024-07-12 12:44:22.105868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:56.375 [2024-07-12 12:44:22.109709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:56.375 [2024-07-12 12:44:22.109739] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:56.375 12:44:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:56.375 [2024-07-12 12:44:22.352591] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:56.375 12:44:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 82692 00:19:57.348 [2024-07-12 12:44:23.142864] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
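The block above is the listener-outage cycle driven by host/timeout.sh@99 through @103. Reconstructed from the rpc.py calls traced in this log, the sequence is essentially the following sketch (paths and arguments as they appear in the trace; $rpc_pid is the backgrounded perform_tests run, pid 82692):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # drop the TCP listener: in-flight I/O starts failing and bdev_nvme keeps retrying the reset
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 3
    # restore the listener: the next reconnect attempt succeeds and the reset completes
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    wait "$rpc_pid"

The repeated connect() errno = 111 and "Resetting controller failed." lines in between are the expected reconnect retries while the port is down.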
00:20:02.603 00:20:02.603 Latency(us) 00:20:02.603 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:02.603 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:02.603 Verification LBA range: start 0x0 length 0x4000 00:20:02.603 NVMe0n1 : 10.01 5291.67 20.67 3525.85 0.00 14486.70 681.43 3019898.88 00:20:02.603 =================================================================================================================== 00:20:02.603 Total : 5291.67 20.67 3525.85 0.00 14486.70 0.00 3019898.88 00:20:02.603 0 00:20:02.603 12:44:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 82564 00:20:02.603 12:44:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82564 ']' 00:20:02.603 12:44:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82564 00:20:02.603 12:44:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:20:02.603 12:44:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:02.603 12:44:27 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82564 00:20:02.603 killing process with pid 82564 00:20:02.603 Received shutdown signal, test time was about 10.000000 seconds 00:20:02.603 00:20:02.603 Latency(us) 00:20:02.603 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:02.603 =================================================================================================================== 00:20:02.603 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:02.604 12:44:28 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:02.604 12:44:28 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:02.604 12:44:28 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82564' 00:20:02.604 12:44:28 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82564 00:20:02.604 12:44:28 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82564 00:20:02.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:02.604 12:44:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:20:02.604 12:44:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82801 00:20:02.604 12:44:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82801 /var/tmp/bdevperf.sock 00:20:02.604 12:44:28 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@829 -- # '[' -z 82801 ']' 00:20:02.604 12:44:28 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:02.604 12:44:28 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:02.604 12:44:28 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:02.604 12:44:28 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:02.604 12:44:28 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:02.604 [2024-07-12 12:44:28.307191] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:20:02.604 [2024-07-12 12:44:28.307285] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82801 ]
00:20:02.604 [2024-07-12 12:44:28.438468] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:02.604 [2024-07-12 12:44:28.558281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:20:02.604 [2024-07-12 12:44:28.612330] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring
00:20:03.536 12:44:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:20:03.536 12:44:29 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@862 -- # return 0
00:20:03.536 12:44:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82817
00:20:03.536 12:44:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:20:03.536 12:44:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82801 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:20:03.536 12:44:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:20:03.793 NVMe0n1
00:20:04.050 12:44:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82864
00:20:04.050 12:44:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:20:04.050 12:44:29 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1
00:20:04.050 Running I/O for 10 seconds...
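For this second bdevperf run the controller is attached with explicit reconnect behaviour: going by the flag names in the bdev_nvme_attach_controller call traced above, the bdev layer retries the connection every 2 seconds and gives the controller up after 5 seconds without a successful reconnect. A minimal sketch of that call (socket path and arguments copied from the trace):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    # attach the remote subsystem as bdev NVMe0n1 with a 2 s reconnect delay and a 5 s ctrlr-loss timeout
    $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2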
00:20:04.981 12:44:30 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:05.242 [2024-07-12 12:44:31.139029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:130640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.242 [2024-07-12 12:44:31.139094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.242 [2024-07-12 12:44:31.139120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:95248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.242 [2024-07-12 12:44:31.139133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.242 [2024-07-12 12:44:31.139145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:57256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.242 [2024-07-12 12:44:31.139155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.242 [2024-07-12 12:44:31.139167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:123696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.242 [2024-07-12 12:44:31.139178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.242 [2024-07-12 12:44:31.139190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:60152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.242 [2024-07-12 12:44:31.139201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.242 [2024-07-12 12:44:31.139212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:123008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.242 [2024-07-12 12:44:31.139222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.242 [2024-07-12 12:44:31.139234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:42224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.242 [2024-07-12 12:44:31.139244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.242 [2024-07-12 12:44:31.139256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:38864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.242 [2024-07-12 12:44:31.139266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.242 [2024-07-12 12:44:31.139278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:42864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.242 [2024-07-12 12:44:31.139287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.242 [2024-07-12 12:44:31.139299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:73560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:05.242 [2024-07-12 12:44:31.139309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.242 [2024-07-12 12:44:31.139331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:74320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.242 [2024-07-12 12:44:31.139340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.242 [2024-07-12 12:44:31.139352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.242 [2024-07-12 12:44:31.139361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.242 [2024-07-12 12:44:31.139373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:119736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.242 [2024-07-12 12:44:31.139382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.242 [2024-07-12 12:44:31.139397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:69768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.242 [2024-07-12 12:44:31.139421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.242 [2024-07-12 12:44:31.139434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:71840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.242 [2024-07-12 12:44:31.139444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.242 [2024-07-12 12:44:31.139456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:111664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.242 [2024-07-12 12:44:31.139466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.242 [2024-07-12 12:44:31.139478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:38232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.242 [2024-07-12 12:44:31.139488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.242 [2024-07-12 12:44:31.139502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:77152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.242 [2024-07-12 12:44:31.139512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.242 [2024-07-12 12:44:31.139525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:22024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.242 [2024-07-12 12:44:31.139547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.242 [2024-07-12 12:44:31.139559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:119496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.242 [2024-07-12 
12:44:31.139569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.242 [2024-07-12 12:44:31.139581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:91976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.242 [2024-07-12 12:44:31.139591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.242 [2024-07-12 12:44:31.139604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:112840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.242 [2024-07-12 12:44:31.139614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.242 [2024-07-12 12:44:31.139626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:116320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.242 [2024-07-12 12:44:31.139636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.242 [2024-07-12 12:44:31.139648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:73200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.242 [2024-07-12 12:44:31.139658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.242 [2024-07-12 12:44:31.139671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:18968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.242 [2024-07-12 12:44:31.139680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.242 [2024-07-12 12:44:31.139692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.242 [2024-07-12 12:44:31.139704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.242 [2024-07-12 12:44:31.139717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:58096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.242 [2024-07-12 12:44:31.139727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.242 [2024-07-12 12:44:31.139738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:50376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.242 [2024-07-12 12:44:31.139748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.242 [2024-07-12 12:44:31.139760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.242 [2024-07-12 12:44:31.139770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.242 [2024-07-12 12:44:31.139781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:38512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.242 [2024-07-12 12:44:31.139790] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.242 [2024-07-12 12:44:31.139802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:74344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.242 [2024-07-12 12:44:31.139811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.242 [2024-07-12 12:44:31.139823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:83976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.242 [2024-07-12 12:44:31.139833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.242 [2024-07-12 12:44:31.139850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:32728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.242 [2024-07-12 12:44:31.139860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.242 [2024-07-12 12:44:31.139872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:36352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.243 [2024-07-12 12:44:31.139882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.243 [2024-07-12 12:44:31.139893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:108824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.243 [2024-07-12 12:44:31.139903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.243 [2024-07-12 12:44:31.139914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:109008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.243 [2024-07-12 12:44:31.139924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.243 [2024-07-12 12:44:31.139935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.243 [2024-07-12 12:44:31.139945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.243 [2024-07-12 12:44:31.139957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:16040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.243 [2024-07-12 12:44:31.139967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.243 [2024-07-12 12:44:31.139980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:112832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.243 [2024-07-12 12:44:31.139990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.243 [2024-07-12 12:44:31.140001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:56760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.243 [2024-07-12 12:44:31.140011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.243 [2024-07-12 12:44:31.140023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:108744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.243 [2024-07-12 12:44:31.140032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.243 [2024-07-12 12:44:31.140046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:68312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.243 [2024-07-12 12:44:31.140056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.243 [2024-07-12 12:44:31.140068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:49320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.243 [2024-07-12 12:44:31.140078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.243 [2024-07-12 12:44:31.140089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:96496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.243 [2024-07-12 12:44:31.140099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.243 [2024-07-12 12:44:31.140111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:51232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.243 [2024-07-12 12:44:31.140121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.243 [2024-07-12 12:44:31.140133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:56960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.243 [2024-07-12 12:44:31.140142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.243 [2024-07-12 12:44:31.140153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.243 [2024-07-12 12:44:31.140163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.243 [2024-07-12 12:44:31.140174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.243 [2024-07-12 12:44:31.140184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.243 [2024-07-12 12:44:31.140195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:126704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.243 [2024-07-12 12:44:31.140205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.243 [2024-07-12 12:44:31.140217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:122608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.243 [2024-07-12 12:44:31.140227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.243 [2024-07-12 12:44:31.140239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:63944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.243 [2024-07-12 12:44:31.140249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.243 [2024-07-12 12:44:31.140262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.243 [2024-07-12 12:44:31.140272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.243 [2024-07-12 12:44:31.140284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:123952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.243 [2024-07-12 12:44:31.140293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.243 [2024-07-12 12:44:31.140305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:97048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.243 [2024-07-12 12:44:31.140315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.243 [2024-07-12 12:44:31.140327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:48384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.243 [2024-07-12 12:44:31.140337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.243 [2024-07-12 12:44:31.140357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:118848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.243 [2024-07-12 12:44:31.140367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.243 [2024-07-12 12:44:31.140378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:21624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.243 [2024-07-12 12:44:31.140388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.243 [2024-07-12 12:44:31.140400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:105304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.243 [2024-07-12 12:44:31.140420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.243 [2024-07-12 12:44:31.140432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:40080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.243 [2024-07-12 12:44:31.140442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.243 [2024-07-12 12:44:31.140454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:27152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.243 [2024-07-12 12:44:31.140465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.243 
[2024-07-12 12:44:31.140476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:123320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.243 [2024-07-12 12:44:31.140486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.243 [2024-07-12 12:44:31.140498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:127384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.243 [2024-07-12 12:44:31.140508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.243 [2024-07-12 12:44:31.140519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:121640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.243 [2024-07-12 12:44:31.140529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.243 [2024-07-12 12:44:31.140541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.243 [2024-07-12 12:44:31.140551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.243 [2024-07-12 12:44:31.140563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:62344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.243 [2024-07-12 12:44:31.140572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.243 [2024-07-12 12:44:31.140585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:43320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.243 [2024-07-12 12:44:31.140595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.243 [2024-07-12 12:44:31.140607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:10536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.243 [2024-07-12 12:44:31.140617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.243 [2024-07-12 12:44:31.140629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:37448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.243 [2024-07-12 12:44:31.140639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.243 [2024-07-12 12:44:31.140650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:108240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.243 [2024-07-12 12:44:31.140660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.243 [2024-07-12 12:44:31.140671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:87840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.243 [2024-07-12 12:44:31.140681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.243 [2024-07-12 12:44:31.140693] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:37936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.243 [2024-07-12 12:44:31.140703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.243 [2024-07-12 12:44:31.140715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.244 [2024-07-12 12:44:31.140724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.244 [2024-07-12 12:44:31.140736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.244 [2024-07-12 12:44:31.140745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.244 [2024-07-12 12:44:31.140758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:117160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.244 [2024-07-12 12:44:31.140768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.244 [2024-07-12 12:44:31.140780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:30616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.244 [2024-07-12 12:44:31.140790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.244 [2024-07-12 12:44:31.140801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:94496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.244 [2024-07-12 12:44:31.140812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.244 [2024-07-12 12:44:31.140824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:111000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.244 [2024-07-12 12:44:31.140834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.244 [2024-07-12 12:44:31.140847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:112840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.244 [2024-07-12 12:44:31.140857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.244 [2024-07-12 12:44:31.140871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:60464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.244 [2024-07-12 12:44:31.140880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.244 [2024-07-12 12:44:31.140892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:101584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.244 [2024-07-12 12:44:31.140902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.244 [2024-07-12 12:44:31.140913] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:89752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.244 [2024-07-12 12:44:31.140924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.244 [2024-07-12 12:44:31.140936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.244 [2024-07-12 12:44:31.140946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.244 [2024-07-12 12:44:31.140958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:96288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.244 [2024-07-12 12:44:31.140968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.244 [2024-07-12 12:44:31.140980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.244 [2024-07-12 12:44:31.140990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.244 [2024-07-12 12:44:31.141002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.244 [2024-07-12 12:44:31.141011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.244 [2024-07-12 12:44:31.141022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.244 [2024-07-12 12:44:31.141032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.244 [2024-07-12 12:44:31.141043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:89920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.244 [2024-07-12 12:44:31.141053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.244 [2024-07-12 12:44:31.141064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:102768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.244 [2024-07-12 12:44:31.141074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.244 [2024-07-12 12:44:31.141087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.244 [2024-07-12 12:44:31.141097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.244 [2024-07-12 12:44:31.141108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:114392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.244 [2024-07-12 12:44:31.141118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.244 [2024-07-12 12:44:31.141130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:36 nsid:1 lba:4816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.244 [2024-07-12 12:44:31.141140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.244 [2024-07-12 12:44:31.141151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:50984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.244 [2024-07-12 12:44:31.141161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.244 [2024-07-12 12:44:31.141172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:116936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.244 [2024-07-12 12:44:31.141181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.244 [2024-07-12 12:44:31.141193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.244 [2024-07-12 12:44:31.141203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.244 [2024-07-12 12:44:31.141215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:49552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.244 [2024-07-12 12:44:31.141235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.244 [2024-07-12 12:44:31.141247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.244 [2024-07-12 12:44:31.141258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.244 [2024-07-12 12:44:31.141277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:82240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.244 [2024-07-12 12:44:31.141287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.244 [2024-07-12 12:44:31.141299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:110032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.244 [2024-07-12 12:44:31.141309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.244 [2024-07-12 12:44:31.141320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:51144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.244 [2024-07-12 12:44:31.141330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.244 [2024-07-12 12:44:31.141341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.244 [2024-07-12 12:44:31.141351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.244 [2024-07-12 12:44:31.141362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:67936 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.244 [2024-07-12 12:44:31.141373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.244 [2024-07-12 12:44:31.141385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:86512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.244 [2024-07-12 12:44:31.141394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.244 [2024-07-12 12:44:31.141417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:81712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.244 [2024-07-12 12:44:31.141428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.244 [2024-07-12 12:44:31.141440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:4632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.244 [2024-07-12 12:44:31.141450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.244 [2024-07-12 12:44:31.141462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:77744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.244 [2024-07-12 12:44:31.141472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.244 [2024-07-12 12:44:31.141484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:91096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.244 [2024-07-12 12:44:31.141494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.244 [2024-07-12 12:44:31.141505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:110376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.244 [2024-07-12 12:44:31.141515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.244 [2024-07-12 12:44:31.141526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:25896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.244 [2024-07-12 12:44:31.141536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.244 [2024-07-12 12:44:31.141547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.244 [2024-07-12 12:44:31.141557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.244 [2024-07-12 12:44:31.141569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.244 [2024-07-12 12:44:31.141579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.244 [2024-07-12 12:44:31.141590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:97360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:05.244 [2024-07-12 12:44:31.141606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.244 [2024-07-12 12:44:31.141618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:121952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.244 [2024-07-12 12:44:31.141628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.244 [2024-07-12 12:44:31.141646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:44136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.244 [2024-07-12 12:44:31.141656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.244 [2024-07-12 12:44:31.141669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:113184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.244 [2024-07-12 12:44:31.141679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.245 [2024-07-12 12:44:31.141691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:121096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.245 [2024-07-12 12:44:31.141701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.245 [2024-07-12 12:44:31.141713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:88048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.245 [2024-07-12 12:44:31.141723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.245 [2024-07-12 12:44:31.141735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:27472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.245 [2024-07-12 12:44:31.141744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.245 [2024-07-12 12:44:31.141756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:90816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.245 [2024-07-12 12:44:31.141766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.245 [2024-07-12 12:44:31.141777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:128272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.245 [2024-07-12 12:44:31.141787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.245 [2024-07-12 12:44:31.141798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.245 [2024-07-12 12:44:31.141808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.245 [2024-07-12 12:44:31.141820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:121208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.245 [2024-07-12 12:44:31.141829] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.245 [2024-07-12 12:44:31.141841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:57776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.245 [2024-07-12 12:44:31.141850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.245 [2024-07-12 12:44:31.141862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:93600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.245 [2024-07-12 12:44:31.141871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.245 [2024-07-12 12:44:31.141883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.245 [2024-07-12 12:44:31.141892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.245 [2024-07-12 12:44:31.141904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:86248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.245 [2024-07-12 12:44:31.141913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.245 [2024-07-12 12:44:31.141925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.245 [2024-07-12 12:44:31.141935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.245 [2024-07-12 12:44:31.141947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.245 [2024-07-12 12:44:31.141963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.245 [2024-07-12 12:44:31.141974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8310 is same with the state(5) to be set 00:20:05.245 [2024-07-12 12:44:31.141987] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:05.245 [2024-07-12 12:44:31.142002] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:05.245 [2024-07-12 12:44:31.142012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60536 len:8 PRP1 0x0 PRP2 0x0 00:20:05.245 [2024-07-12 12:44:31.142022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.245 [2024-07-12 12:44:31.142076] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8c8310 was disconnected and freed. reset controller. 
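Everything in the block above is the fallout of nvmf_subsystem_remove_listener: bdevperf still has a full queue depth of reads in flight, and when the I/O qpair is torn down each of them completes with the NVMe status "ABORTED - SQ DELETION" before the qpair is freed and the controller reset begins. If this output has been captured to a file, the size of that flush is easy to quantify; a small sketch (the capture path is an assumption, not something the test writes):

  # Count how many in-flight reads were aborted when the qpair went away.
  # /tmp/bdevperf.log is a hypothetical capture of the output above.
  grep -c 'ABORTED - SQ DELETION' /tmp/bdevperf.log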
00:20:05.245 [2024-07-12 12:44:31.142160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.245 [2024-07-12 12:44:31.142176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.245 [2024-07-12 12:44:31.142188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.245 [2024-07-12 12:44:31.142197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.245 [2024-07-12 12:44:31.142208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.245 [2024-07-12 12:44:31.142218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.245 [2024-07-12 12:44:31.142228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.245 [2024-07-12 12:44:31.142238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.245 [2024-07-12 12:44:31.142247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x859c00 is same with the state(5) to be set 00:20:05.245 [2024-07-12 12:44:31.142515] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:05.245 [2024-07-12 12:44:31.142542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x859c00 (9): Bad file descriptor 00:20:05.245 [2024-07-12 12:44:31.142663] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:05.245 [2024-07-12 12:44:31.142685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x859c00 with addr=10.0.0.2, port=4420 00:20:05.245 [2024-07-12 12:44:31.142696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x859c00 is same with the state(5) to be set 00:20:05.245 [2024-07-12 12:44:31.142714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x859c00 (9): Bad file descriptor 00:20:05.245 [2024-07-12 12:44:31.142730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:05.245 [2024-07-12 12:44:31.142740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:05.245 [2024-07-12 12:44:31.142751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:05.245 [2024-07-12 12:44:31.142772] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
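From here the log settles into the reconnect loop: each attempt fails with errno 111 (connection refused) because the listener was removed, the controller drops back into a failed state, and the next attempt is scheduled --reconnect-delay-sec later, until the controller-loss timeout finally gives up at 12:44:37. That spacing is what the bpftrace trace.txt dump further down confirms, with "reconnect delay bdev controller" entries roughly 2 seconds apart. A rough shell-arithmetic sketch of the expectation (variable names are illustrative):

  # With a 2 s reconnect delay inside a 5 s controller-loss window, roughly
  # three delayed reconnect attempts should land in trace.txt, which is the
  # count the test's grep -c check later compares against 2.
  reconnect_delay_sec=2
  ctrlr_loss_timeout_sec=5
  echo $(( ctrlr_loss_timeout_sec / reconnect_delay_sec + 1 ))   # -> 3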
00:20:05.245 [2024-07-12 12:44:31.142783] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:05.245 12:44:31 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 82864 00:20:07.144 [2024-07-12 12:44:33.143160] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:07.144 [2024-07-12 12:44:33.143238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x859c00 with addr=10.0.0.2, port=4420 00:20:07.144 [2024-07-12 12:44:33.143256] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x859c00 is same with the state(5) to be set 00:20:07.144 [2024-07-12 12:44:33.143285] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x859c00 (9): Bad file descriptor 00:20:07.144 [2024-07-12 12:44:33.143306] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:07.144 [2024-07-12 12:44:33.143317] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:07.144 [2024-07-12 12:44:33.143328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:07.144 [2024-07-12 12:44:33.143362] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:07.144 [2024-07-12 12:44:33.143375] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:09.670 [2024-07-12 12:44:35.143695] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:20:09.670 [2024-07-12 12:44:35.143776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x859c00 with addr=10.0.0.2, port=4420 00:20:09.670 [2024-07-12 12:44:35.143793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x859c00 is same with the state(5) to be set 00:20:09.670 [2024-07-12 12:44:35.143824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x859c00 (9): Bad file descriptor 00:20:09.670 [2024-07-12 12:44:35.143845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:09.670 [2024-07-12 12:44:35.143856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:09.670 [2024-07-12 12:44:35.143867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:09.670 [2024-07-12 12:44:35.143897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:09.670 [2024-07-12 12:44:35.143909] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:11.089 [2024-07-12 12:44:37.143985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:11.089 [2024-07-12 12:44:37.144051] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:11.089 [2024-07-12 12:44:37.144064] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:11.089 [2024-07-12 12:44:37.144076] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:20:11.089 [2024-07-12 12:44:37.144110] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:12.461 00:20:12.461 Latency(us) 00:20:12.461 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.461 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:20:12.461 NVMe0n1 : 8.13 2045.53 7.99 15.74 0.00 62043.52 8281.37 7015926.69 00:20:12.461 =================================================================================================================== 00:20:12.461 Total : 2045.53 7.99 15.74 0.00 62043.52 8281.37 7015926.69 00:20:12.461 0 00:20:12.461 12:44:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:12.461 Attaching 5 probes... 00:20:12.461 1297.877667: reset bdev controller NVMe0 00:20:12.461 1297.964750: reconnect bdev controller NVMe0 00:20:12.461 3298.348011: reconnect delay bdev controller NVMe0 00:20:12.461 3298.378309: reconnect bdev controller NVMe0 00:20:12.461 5298.830113: reconnect delay bdev controller NVMe0 00:20:12.461 5298.873699: reconnect bdev controller NVMe0 00:20:12.461 7299.327577: reconnect delay bdev controller NVMe0 00:20:12.461 7299.353630: reconnect bdev controller NVMe0 00:20:12.461 12:44:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:20:12.461 12:44:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:20:12.461 12:44:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 82817 00:20:12.461 12:44:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:20:12.461 12:44:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82801 00:20:12.461 12:44:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82801 ']' 00:20:12.461 12:44:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82801 00:20:12.461 12:44:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:20:12.461 12:44:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:12.461 12:44:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82801 00:20:12.461 killing process with pid 82801 00:20:12.461 Received shutdown signal, test time was about 8.186322 seconds 00:20:12.461 00:20:12.461 Latency(us) 00:20:12.461 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.461 =================================================================================================================== 00:20:12.461 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:12.461 12:44:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:12.461 12:44:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:12.461 12:44:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82801' 00:20:12.461 12:44:38 nvmf_tcp.nvmf_timeout -- 
common/autotest_common.sh@967 -- # kill 82801 00:20:12.461 12:44:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82801 00:20:12.461 12:44:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:12.717 12:44:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:20:12.717 12:44:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:20:12.717 12:44:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:12.717 12:44:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:20:12.973 12:44:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:12.973 12:44:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:20:12.973 12:44:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:12.973 12:44:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:12.973 rmmod nvme_tcp 00:20:12.973 rmmod nvme_fabrics 00:20:12.973 rmmod nvme_keyring 00:20:12.973 12:44:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:12.973 12:44:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:20:12.973 12:44:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:20:12.973 12:44:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 82369 ']' 00:20:12.973 12:44:38 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 82369 00:20:12.973 12:44:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@948 -- # '[' -z 82369 ']' 00:20:12.973 12:44:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # kill -0 82369 00:20:12.973 12:44:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # uname 00:20:12.973 12:44:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:12.973 12:44:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82369 00:20:12.973 killing process with pid 82369 00:20:12.973 12:44:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:12.973 12:44:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:12.973 12:44:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82369' 00:20:12.973 12:44:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@967 -- # kill 82369 00:20:12.973 12:44:38 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@972 -- # wait 82369 00:20:13.230 12:44:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:13.230 12:44:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:13.230 12:44:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:13.230 12:44:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:13.230 12:44:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:13.230 12:44:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.230 12:44:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:13.230 12:44:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.230 12:44:39 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:13.230 ************************************ 00:20:13.230 END TEST 
nvmf_timeout 00:20:13.230 ************************************ 00:20:13.230 00:20:13.230 real 0m47.605s 00:20:13.230 user 2m19.645s 00:20:13.230 sys 0m5.981s 00:20:13.230 12:44:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:13.230 12:44:39 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:13.230 12:44:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:13.230 12:44:39 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ virt == phy ]] 00:20:13.230 12:44:39 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:20:13.230 12:44:39 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:13.230 12:44:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:13.230 12:44:39 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:20:13.230 ************************************ 00:20:13.230 END TEST nvmf_tcp 00:20:13.230 ************************************ 00:20:13.230 00:20:13.230 real 12m20.226s 00:20:13.230 user 30m3.390s 00:20:13.230 sys 3m5.956s 00:20:13.230 12:44:39 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:13.230 12:44:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:13.487 12:44:39 -- common/autotest_common.sh@1142 -- # return 0 00:20:13.487 12:44:39 -- spdk/autotest.sh@288 -- # [[ 1 -eq 0 ]] 00:20:13.487 12:44:39 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:13.487 12:44:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:13.487 12:44:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:13.487 12:44:39 -- common/autotest_common.sh@10 -- # set +x 00:20:13.487 ************************************ 00:20:13.487 START TEST nvmf_dif 00:20:13.487 ************************************ 00:20:13.487 12:44:39 nvmf_dif -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:20:13.487 * Looking for test storage... 
00:20:13.487 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:13.487 12:44:39 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:13.487 12:44:39 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:20:13.487 12:44:39 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:13.487 12:44:39 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:13.487 12:44:39 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:13.487 12:44:39 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:13.487 12:44:39 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:13.487 12:44:39 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:13.487 12:44:39 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:13.487 12:44:39 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:13.487 12:44:39 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:13.487 12:44:39 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:13.487 12:44:39 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:20:13.487 12:44:39 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=16360ad5-8c23-4d49-afe0-9a35c426fec5 00:20:13.487 12:44:39 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:13.487 12:44:39 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:13.487 12:44:39 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:13.487 12:44:39 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:13.487 12:44:39 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:13.487 12:44:39 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:13.487 12:44:39 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:13.487 12:44:39 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:13.487 12:44:39 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.487 12:44:39 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.487 12:44:39 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.487 12:44:39 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:20:13.487 12:44:39 nvmf_dif -- paths/export.sh@6 
-- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.487 12:44:39 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:20:13.487 12:44:39 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:13.487 12:44:39 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:13.487 12:44:39 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:13.487 12:44:39 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:13.487 12:44:39 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:13.487 12:44:39 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:13.488 12:44:39 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:13.488 12:44:39 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:13.488 12:44:39 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:20:13.488 12:44:39 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:20:13.488 12:44:39 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:20:13.488 12:44:39 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:20:13.488 12:44:39 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:20:13.488 12:44:39 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:13.488 12:44:39 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:13.488 12:44:39 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:13.488 12:44:39 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:13.488 12:44:39 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:13.488 12:44:39 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.488 12:44:39 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:13.488 12:44:39 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.488 12:44:39 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:13.488 12:44:39 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:13.488 12:44:39 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:13.488 12:44:39 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:13.488 12:44:39 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:13.488 12:44:39 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:13.488 12:44:39 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:13.488 12:44:39 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:13.488 12:44:39 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:13.488 12:44:39 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:13.488 12:44:39 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:13.488 12:44:39 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:13.488 12:44:39 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:13.488 12:44:39 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:13.488 12:44:39 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:13.488 12:44:39 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:13.488 12:44:39 
nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:13.488 12:44:39 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:13.488 12:44:39 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:13.488 12:44:39 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:13.488 Cannot find device "nvmf_tgt_br" 00:20:13.488 12:44:39 nvmf_dif -- nvmf/common.sh@155 -- # true 00:20:13.488 12:44:39 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:13.488 Cannot find device "nvmf_tgt_br2" 00:20:13.488 12:44:39 nvmf_dif -- nvmf/common.sh@156 -- # true 00:20:13.488 12:44:39 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:13.488 12:44:39 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:13.488 Cannot find device "nvmf_tgt_br" 00:20:13.488 12:44:39 nvmf_dif -- nvmf/common.sh@158 -- # true 00:20:13.488 12:44:39 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:13.488 Cannot find device "nvmf_tgt_br2" 00:20:13.488 12:44:39 nvmf_dif -- nvmf/common.sh@159 -- # true 00:20:13.488 12:44:39 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:13.745 12:44:39 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:13.745 12:44:39 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:13.745 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:13.745 12:44:39 nvmf_dif -- nvmf/common.sh@162 -- # true 00:20:13.745 12:44:39 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:13.745 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:13.745 12:44:39 nvmf_dif -- nvmf/common.sh@163 -- # true 00:20:13.745 12:44:39 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:13.745 12:44:39 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:13.745 12:44:39 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:13.745 12:44:39 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:13.745 12:44:39 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:13.745 12:44:39 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:13.745 12:44:39 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:13.745 12:44:39 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:13.745 12:44:39 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:13.745 12:44:39 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:13.745 12:44:39 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:13.745 12:44:39 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:13.745 12:44:39 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:13.745 12:44:39 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:13.745 12:44:39 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:13.745 12:44:39 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:13.745 
12:44:39 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:13.745 12:44:39 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:13.745 12:44:39 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:13.745 12:44:39 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:13.745 12:44:39 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:13.745 12:44:39 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:13.745 12:44:39 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:13.745 12:44:39 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:13.745 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:13.745 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:20:13.745 00:20:13.745 --- 10.0.0.2 ping statistics --- 00:20:13.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.745 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:20:13.745 12:44:39 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:13.745 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:13.745 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:20:13.745 00:20:13.745 --- 10.0.0.3 ping statistics --- 00:20:13.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.745 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:20:13.745 12:44:39 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:13.745 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:13.745 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:20:13.745 00:20:13.745 --- 10.0.0.1 ping statistics --- 00:20:13.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.745 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:20:13.745 12:44:39 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:13.745 12:44:39 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:20:13.745 12:44:39 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:20:13.745 12:44:39 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:14.003 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:14.003 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:14.003 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:14.260 12:44:40 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:14.260 12:44:40 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:14.260 12:44:40 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:14.260 12:44:40 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:14.260 12:44:40 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:14.260 12:44:40 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:14.260 12:44:40 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:20:14.260 12:44:40 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:20:14.260 12:44:40 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:14.260 12:44:40 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:14.261 12:44:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:14.261 12:44:40 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=83302 00:20:14.261 
12:44:40 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:14.261 12:44:40 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 83302 00:20:14.261 12:44:40 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 83302 ']' 00:20:14.261 12:44:40 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.261 12:44:40 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:14.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.261 12:44:40 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.261 12:44:40 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:14.261 12:44:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:14.261 [2024-07-12 12:44:40.166306] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:20:14.261 [2024-07-12 12:44:40.166422] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:14.261 [2024-07-12 12:44:40.302194] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.518 [2024-07-12 12:44:40.453567] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:14.518 [2024-07-12 12:44:40.453663] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:14.518 [2024-07-12 12:44:40.453678] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:14.518 [2024-07-12 12:44:40.453691] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:14.518 [2024-07-12 12:44:40.453703] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
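The trace above (nvmf/common.sh, nvmf_veth_init) builds the virtual network that the nvmf_dif test runs against before nvmf_tgt is started inside the nvmf_tgt_ns_spdk namespace. Condensed to its essentials, and leaving out the second target interface (nvmf_tgt_if2 / 10.0.0.3), the link-up commands and the ping checks already shown in the trace, the topology setup is roughly:

    # Sketch of the nvmf_veth_init topology, restated from the trace above.
    ip netns add nvmf_tgt_ns_spdk                              # target gets its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk             # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target listen address
    ip link add nvmf_br type bridge                            # bridge joining both veth peers
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP to port 4420

The cleanup attempts that fail with "Cannot find device" earlier in the trace are expected: they tear down leftovers from a previous run before this topology is created.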
00:20:14.518 [2024-07-12 12:44:40.453756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.518 [2024-07-12 12:44:40.518187] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:20:15.449 12:44:41 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:15.449 12:44:41 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:20:15.449 12:44:41 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:15.449 12:44:41 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:15.449 12:44:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:15.449 12:44:41 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:15.449 12:44:41 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:20:15.449 12:44:41 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:20:15.449 12:44:41 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.449 12:44:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:15.449 [2024-07-12 12:44:41.265431] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:15.449 12:44:41 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.449 12:44:41 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:20:15.449 12:44:41 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:15.449 12:44:41 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:15.449 12:44:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:15.449 ************************************ 00:20:15.449 START TEST fio_dif_1_default 00:20:15.449 ************************************ 00:20:15.449 12:44:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:20:15.449 12:44:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:20:15.449 12:44:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:20:15.449 12:44:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:20:15.449 12:44:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:20:15.449 12:44:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:20:15.449 12:44:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:15.449 12:44:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.449 12:44:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:15.449 bdev_null0 00:20:15.449 12:44:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.449 12:44:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:15.449 12:44:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.449 12:44:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:15.449 12:44:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.449 12:44:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:15.449 12:44:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.449 12:44:41 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:15.449 12:44:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.450 12:44:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:15.450 12:44:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.450 12:44:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:15.450 [2024-07-12 12:44:41.305584] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:15.450 12:44:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.450 12:44:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:20:15.450 12:44:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:20:15.450 12:44:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:15.450 12:44:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:20:15.450 12:44:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:20:15.450 12:44:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:15.450 12:44:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:15.450 12:44:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:15.450 { 00:20:15.450 "params": { 00:20:15.450 "name": "Nvme$subsystem", 00:20:15.450 "trtype": "$TEST_TRANSPORT", 00:20:15.450 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:15.450 "adrfam": "ipv4", 00:20:15.450 "trsvcid": "$NVMF_PORT", 00:20:15.450 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:15.450 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:15.450 "hdgst": ${hdgst:-false}, 00:20:15.450 "ddgst": ${ddgst:-false} 00:20:15.450 }, 00:20:15.450 "method": "bdev_nvme_attach_controller" 00:20:15.450 } 00:20:15.450 EOF 00:20:15.450 )") 00:20:15.450 12:44:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:15.450 12:44:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:15.450 12:44:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:15.450 12:44:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:15.450 12:44:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:15.450 12:44:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:20:15.450 12:44:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:15.450 12:44:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:15.450 12:44:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:20:15.450 12:44:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:20:15.450 12:44:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:20:15.450 12:44:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:20:15.450 12:44:41 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:15.450 12:44:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:20:15.450 12:44:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:15.450 12:44:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:20:15.450 12:44:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:20:15.450 12:44:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:20:15.450 12:44:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:20:15.450 12:44:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:15.450 "params": { 00:20:15.450 "name": "Nvme0", 00:20:15.450 "trtype": "tcp", 00:20:15.450 "traddr": "10.0.0.2", 00:20:15.450 "adrfam": "ipv4", 00:20:15.450 "trsvcid": "4420", 00:20:15.450 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:15.450 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:15.450 "hdgst": false, 00:20:15.450 "ddgst": false 00:20:15.450 }, 00:20:15.450 "method": "bdev_nvme_attach_controller" 00:20:15.450 }' 00:20:15.450 12:44:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:15.450 12:44:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:15.450 12:44:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:15.450 12:44:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:15.450 12:44:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:15.450 12:44:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:15.450 12:44:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:15.450 12:44:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:15.450 12:44:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:15.450 12:44:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:15.450 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:15.450 fio-3.35 00:20:15.450 Starting 1 thread 00:20:27.647 00:20:27.647 filename0: (groupid=0, jobs=1): err= 0: pid=83369: Fri Jul 12 12:44:52 2024 00:20:27.647 read: IOPS=8683, BW=33.9MiB/s (35.6MB/s)(339MiB/10001msec) 00:20:27.647 slat (usec): min=6, max=762, avg= 8.64, stdev= 4.67 00:20:27.647 clat (usec): min=353, max=3612, avg=435.48, stdev=35.73 00:20:27.647 lat (usec): min=359, max=3648, avg=444.12, stdev=36.43 00:20:27.647 clat percentiles (usec): 00:20:27.647 | 1.00th=[ 383], 5.00th=[ 404], 10.00th=[ 408], 20.00th=[ 416], 00:20:27.647 | 30.00th=[ 424], 40.00th=[ 429], 50.00th=[ 433], 60.00th=[ 437], 00:20:27.647 | 70.00th=[ 445], 80.00th=[ 449], 90.00th=[ 461], 95.00th=[ 478], 00:20:27.647 | 99.00th=[ 537], 99.50th=[ 570], 99.90th=[ 644], 99.95th=[ 742], 00:20:27.647 | 99.99th=[ 1172] 00:20:27.647 bw ( KiB/s): min=33536, max=35456, per=100.00%, avg=34837.58, stdev=378.98, samples=19 00:20:27.647 iops : min= 8384, max= 8864, avg=8709.37, stdev=94.76, samples=19 00:20:27.647 lat (usec) : 500=97.94%, 750=2.02%, 1000=0.03% 00:20:27.647 lat 
(msec) : 2=0.02%, 4=0.01% 00:20:27.647 cpu : usr=84.17%, sys=13.76%, ctx=121, majf=0, minf=0 00:20:27.647 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:27.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.647 issued rwts: total=86848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.647 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:27.647 00:20:27.647 Run status group 0 (all jobs): 00:20:27.647 READ: bw=33.9MiB/s (35.6MB/s), 33.9MiB/s-33.9MiB/s (35.6MB/s-35.6MB/s), io=339MiB (356MB), run=10001-10001msec 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.647 00:20:27.647 real 0m11.031s 00:20:27.647 user 0m9.083s 00:20:27.647 sys 0m1.640s 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:27.647 ************************************ 00:20:27.647 END TEST fio_dif_1_default 00:20:27.647 ************************************ 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:20:27.647 12:44:52 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:20:27.647 12:44:52 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:20:27.647 12:44:52 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:27.647 12:44:52 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:27.647 12:44:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:27.647 ************************************ 00:20:27.647 START TEST fio_dif_1_multi_subsystems 00:20:27.647 ************************************ 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:27.647 12:44:52 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:27.647 bdev_null0 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:27.647 [2024-07-12 12:44:52.391450] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:27.647 bdev_null1 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:27.647 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:27.648 { 00:20:27.648 "params": { 00:20:27.648 "name": "Nvme$subsystem", 00:20:27.648 "trtype": "$TEST_TRANSPORT", 00:20:27.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.648 "adrfam": "ipv4", 00:20:27.648 "trsvcid": "$NVMF_PORT", 00:20:27.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.648 "hdgst": ${hdgst:-false}, 00:20:27.648 "ddgst": ${ddgst:-false} 00:20:27.648 }, 00:20:27.648 "method": "bdev_nvme_attach_controller" 00:20:27.648 } 00:20:27.648 EOF 00:20:27.648 )") 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1341 -- # shift 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:27.648 { 00:20:27.648 "params": { 00:20:27.648 "name": "Nvme$subsystem", 00:20:27.648 "trtype": "$TEST_TRANSPORT", 00:20:27.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.648 "adrfam": "ipv4", 00:20:27.648 "trsvcid": "$NVMF_PORT", 00:20:27.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.648 "hdgst": ${hdgst:-false}, 00:20:27.648 "ddgst": ${ddgst:-false} 00:20:27.648 }, 00:20:27.648 "method": "bdev_nvme_attach_controller" 00:20:27.648 } 00:20:27.648 EOF 00:20:27.648 )") 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
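Up to this point fio_dif_1_multi_subsystems has created two DIF-type-1 null bdevs and exported each one through its own NVMe-oF/TCP subsystem listening on 10.0.0.2:4420. The rpc_cmd calls in the trace go through the harness's RPC client against /var/tmp/spdk.sock; restated as standalone rpc.py invocations (a sketch of the same sequence already shown above, not an additional step), the per-subsystem setup is:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for i in 0 1; do
        $rpc bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 1
        $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i --serial-number 53313233-$i --allow-any-host
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done

The JSON printed next by gen_nvmf_target_json is what tells fio's spdk_bdev engine to attach one bdev_nvme controller per subsystem (Nvme0 and Nvme1) over those listeners.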
00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:27.648 "params": { 00:20:27.648 "name": "Nvme0", 00:20:27.648 "trtype": "tcp", 00:20:27.648 "traddr": "10.0.0.2", 00:20:27.648 "adrfam": "ipv4", 00:20:27.648 "trsvcid": "4420", 00:20:27.648 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:27.648 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:27.648 "hdgst": false, 00:20:27.648 "ddgst": false 00:20:27.648 }, 00:20:27.648 "method": "bdev_nvme_attach_controller" 00:20:27.648 },{ 00:20:27.648 "params": { 00:20:27.648 "name": "Nvme1", 00:20:27.648 "trtype": "tcp", 00:20:27.648 "traddr": "10.0.0.2", 00:20:27.648 "adrfam": "ipv4", 00:20:27.648 "trsvcid": "4420", 00:20:27.648 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:27.648 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:27.648 "hdgst": false, 00:20:27.648 "ddgst": false 00:20:27.648 }, 00:20:27.648 "method": "bdev_nvme_attach_controller" 00:20:27.648 }' 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:27.648 12:44:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:27.648 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:27.648 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:27.648 fio-3.35 00:20:27.648 Starting 2 threads 00:20:37.627 00:20:37.627 filename0: (groupid=0, jobs=1): err= 0: pid=83528: Fri Jul 12 12:45:03 2024 00:20:37.627 read: IOPS=4715, BW=18.4MiB/s (19.3MB/s)(184MiB/10001msec) 00:20:37.627 slat (nsec): min=6928, max=53836, avg=13957.66, stdev=3848.97 00:20:37.627 clat (usec): min=504, max=2338, avg=809.30, stdev=41.42 00:20:37.627 lat (usec): min=512, max=2355, avg=823.26, stdev=41.97 00:20:37.627 clat percentiles (usec): 00:20:37.627 | 1.00th=[ 742], 5.00th=[ 758], 10.00th=[ 766], 20.00th=[ 783], 00:20:37.627 | 30.00th=[ 791], 40.00th=[ 799], 50.00th=[ 807], 60.00th=[ 816], 00:20:37.627 | 70.00th=[ 824], 80.00th=[ 840], 90.00th=[ 857], 95.00th=[ 881], 00:20:37.627 | 99.00th=[ 930], 99.50th=[ 947], 99.90th=[ 1004], 99.95th=[ 1045], 00:20:37.627 | 99.99th=[ 1352] 00:20:37.627 bw ( KiB/s): min=18368, max=19360, per=50.00%, avg=18864.84, stdev=276.18, samples=19 00:20:37.627 iops : min= 4592, max= 
4840, avg=4716.21, stdev=69.04, samples=19 00:20:37.627 lat (usec) : 750=2.57%, 1000=97.33% 00:20:37.627 lat (msec) : 2=0.09%, 4=0.01% 00:20:37.627 cpu : usr=89.72%, sys=8.85%, ctx=11, majf=0, minf=9 00:20:37.627 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:37.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.627 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.627 issued rwts: total=47164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.627 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:37.627 filename1: (groupid=0, jobs=1): err= 0: pid=83529: Fri Jul 12 12:45:03 2024 00:20:37.627 read: IOPS=4715, BW=18.4MiB/s (19.3MB/s)(184MiB/10001msec) 00:20:37.627 slat (nsec): min=7072, max=96359, avg=13686.93, stdev=3749.43 00:20:37.627 clat (usec): min=545, max=2350, avg=810.93, stdev=51.03 00:20:37.627 lat (usec): min=553, max=2363, avg=824.62, stdev=52.25 00:20:37.627 clat percentiles (usec): 00:20:37.627 | 1.00th=[ 701], 5.00th=[ 725], 10.00th=[ 750], 20.00th=[ 775], 00:20:37.627 | 30.00th=[ 791], 40.00th=[ 799], 50.00th=[ 807], 60.00th=[ 824], 00:20:37.627 | 70.00th=[ 832], 80.00th=[ 848], 90.00th=[ 873], 95.00th=[ 889], 00:20:37.627 | 99.00th=[ 938], 99.50th=[ 963], 99.90th=[ 1020], 99.95th=[ 1172], 00:20:37.627 | 99.99th=[ 1352] 00:20:37.627 bw ( KiB/s): min=18368, max=19360, per=50.00%, avg=18863.16, stdev=277.41, samples=19 00:20:37.627 iops : min= 4592, max= 4840, avg=4715.79, stdev=69.35, samples=19 00:20:37.627 lat (usec) : 750=10.86%, 1000=88.99% 00:20:37.627 lat (msec) : 2=0.14%, 4=0.01% 00:20:37.627 cpu : usr=89.50%, sys=9.00%, ctx=80, majf=0, minf=0 00:20:37.627 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:37.627 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.627 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:37.627 issued rwts: total=47160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:37.627 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:37.627 00:20:37.627 Run status group 0 (all jobs): 00:20:37.627 READ: bw=36.8MiB/s (38.6MB/s), 18.4MiB/s-18.4MiB/s (19.3MB/s-19.3MB/s), io=368MiB (386MB), run=10001-10001msec 00:20:37.627 12:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:20:37.627 12:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:20:37.627 12:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:37.627 12:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:37.627 12:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:20:37.627 12:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:37.627 12:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.627 12:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:37.627 12:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.627 12:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:37.628 12:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.628 12:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:20:37.628 12:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.628 12:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:20:37.628 12:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:37.628 12:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:20:37.628 12:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:37.628 12:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.628 12:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:37.628 12:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.628 12:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:37.628 12:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.628 12:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:37.628 12:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.628 00:20:37.628 real 0m11.198s 00:20:37.628 user 0m18.697s 00:20:37.628 sys 0m2.110s 00:20:37.628 12:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:37.628 12:45:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:20:37.628 ************************************ 00:20:37.628 END TEST fio_dif_1_multi_subsystems 00:20:37.628 ************************************ 00:20:37.628 12:45:03 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:20:37.628 12:45:03 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:20:37.628 12:45:03 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:37.628 12:45:03 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:37.628 12:45:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:37.628 ************************************ 00:20:37.628 START TEST fio_dif_rand_params 00:20:37.628 ************************************ 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:37.628 12:45:03 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:37.628 bdev_null0 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:37.628 [2024-07-12 12:45:03.643943] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:37.628 { 00:20:37.628 "params": { 00:20:37.628 "name": "Nvme$subsystem", 00:20:37.628 "trtype": "$TEST_TRANSPORT", 00:20:37.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.628 "adrfam": "ipv4", 00:20:37.628 "trsvcid": "$NVMF_PORT", 00:20:37.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.628 "hdgst": ${hdgst:-false}, 00:20:37.628 "ddgst": ${ddgst:-false} 00:20:37.628 }, 00:20:37.628 "method": "bdev_nvme_attach_controller" 00:20:37.628 } 00:20:37.628 EOF 00:20:37.628 )") 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
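As with the earlier fio runs in this log, the test does not invoke fio directly: fio_plugin first checks (via the ldd/grep/awk steps traced above) whether the spdk_bdev ioengine needs a sanitizer runtime preloaded alongside it, then launches fio with the plugin in LD_PRELOAD. Condensed from the trace (both sanitizer lookups come back empty here, so only the plugin itself ends up preloaded):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep -E 'libasan|libclang_rt\.asan' | awk '{print $3}')   # empty in this run
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61

File descriptor 62 carries the bdev_nvme_attach_controller JSON assembled above and descriptor 61 the generated fio job file; for this fio_dif_rand_params pass the job uses the 128k block size, iodepth 3 and 3 jobs set at the top of the test, which matches what fio's banner below reports.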
00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:37.628 "params": { 00:20:37.628 "name": "Nvme0", 00:20:37.628 "trtype": "tcp", 00:20:37.628 "traddr": "10.0.0.2", 00:20:37.628 "adrfam": "ipv4", 00:20:37.628 "trsvcid": "4420", 00:20:37.628 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:37.628 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:37.628 "hdgst": false, 00:20:37.628 "ddgst": false 00:20:37.628 }, 00:20:37.628 "method": "bdev_nvme_attach_controller" 00:20:37.628 }' 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:37.628 12:45:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:37.886 12:45:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:37.886 12:45:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:37.886 12:45:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:37.886 12:45:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:37.886 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:37.886 ... 
00:20:37.886 fio-3.35 00:20:37.886 Starting 3 threads 00:20:44.446 00:20:44.446 filename0: (groupid=0, jobs=1): err= 0: pid=83685: Fri Jul 12 12:45:09 2024 00:20:44.446 read: IOPS=255, BW=31.9MiB/s (33.5MB/s)(160MiB/5001msec) 00:20:44.446 slat (nsec): min=7421, max=35850, avg=10453.72, stdev=3567.18 00:20:44.446 clat (usec): min=8107, max=14250, avg=11711.76, stdev=304.92 00:20:44.446 lat (usec): min=8115, max=14267, avg=11722.21, stdev=305.16 00:20:44.446 clat percentiles (usec): 00:20:44.446 | 1.00th=[11469], 5.00th=[11600], 10.00th=[11600], 20.00th=[11600], 00:20:44.446 | 30.00th=[11600], 40.00th=[11600], 50.00th=[11600], 60.00th=[11731], 00:20:44.446 | 70.00th=[11731], 80.00th=[11863], 90.00th=[11994], 95.00th=[11994], 00:20:44.446 | 99.00th=[12518], 99.50th=[13042], 99.90th=[14222], 99.95th=[14222], 00:20:44.446 | 99.99th=[14222] 00:20:44.446 bw ( KiB/s): min=32256, max=33024, per=33.32%, avg=32689.78, stdev=396.82, samples=9 00:20:44.446 iops : min= 252, max= 258, avg=255.33, stdev= 3.16, samples=9 00:20:44.446 lat (msec) : 10=0.23%, 20=99.77% 00:20:44.446 cpu : usr=90.72%, sys=8.66%, ctx=11, majf=0, minf=9 00:20:44.446 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:44.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.446 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.446 issued rwts: total=1278,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:44.446 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:44.446 filename0: (groupid=0, jobs=1): err= 0: pid=83686: Fri Jul 12 12:45:09 2024 00:20:44.446 read: IOPS=255, BW=31.9MiB/s (33.5MB/s)(160MiB/5003msec) 00:20:44.446 slat (nsec): min=7460, max=33186, avg=10789.57, stdev=3799.18 00:20:44.446 clat (usec): min=11506, max=14016, avg=11716.02, stdev=222.22 00:20:44.446 lat (usec): min=11514, max=14034, avg=11726.81, stdev=222.55 00:20:44.446 clat percentiles (usec): 00:20:44.446 | 1.00th=[11469], 5.00th=[11600], 10.00th=[11600], 20.00th=[11600], 00:20:44.446 | 30.00th=[11600], 40.00th=[11600], 50.00th=[11600], 60.00th=[11731], 00:20:44.446 | 70.00th=[11731], 80.00th=[11863], 90.00th=[11994], 95.00th=[12125], 00:20:44.446 | 99.00th=[12256], 99.50th=[13042], 99.90th=[13960], 99.95th=[13960], 00:20:44.446 | 99.99th=[13960] 00:20:44.446 bw ( KiB/s): min=32256, max=33024, per=33.32%, avg=32682.67, stdev=404.77, samples=9 00:20:44.446 iops : min= 252, max= 258, avg=255.33, stdev= 3.16, samples=9 00:20:44.446 lat (msec) : 20=100.00% 00:20:44.446 cpu : usr=88.96%, sys=10.46%, ctx=6, majf=0, minf=9 00:20:44.446 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:44.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.446 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.446 issued rwts: total=1278,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:44.446 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:44.446 filename0: (groupid=0, jobs=1): err= 0: pid=83687: Fri Jul 12 12:45:09 2024 00:20:44.446 read: IOPS=255, BW=31.9MiB/s (33.5MB/s)(160MiB/5002msec) 00:20:44.446 slat (nsec): min=7334, max=34505, avg=10581.09, stdev=3684.56 00:20:44.446 clat (usec): min=10059, max=14216, avg=11713.56, stdev=239.71 00:20:44.446 lat (usec): min=10067, max=14236, avg=11724.14, stdev=240.03 00:20:44.446 clat percentiles (usec): 00:20:44.446 | 1.00th=[11469], 5.00th=[11600], 10.00th=[11600], 20.00th=[11600], 00:20:44.446 | 30.00th=[11600], 40.00th=[11600], 
50.00th=[11600], 60.00th=[11731], 00:20:44.446 | 70.00th=[11731], 80.00th=[11863], 90.00th=[11994], 95.00th=[12125], 00:20:44.446 | 99.00th=[12256], 99.50th=[13042], 99.90th=[14222], 99.95th=[14222], 00:20:44.446 | 99.99th=[14222] 00:20:44.446 bw ( KiB/s): min=32256, max=33024, per=33.32%, avg=32682.67, stdev=404.77, samples=9 00:20:44.446 iops : min= 252, max= 258, avg=255.33, stdev= 3.16, samples=9 00:20:44.447 lat (msec) : 20=100.00% 00:20:44.447 cpu : usr=90.94%, sys=8.46%, ctx=8, majf=0, minf=0 00:20:44.447 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:44.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.447 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:44.447 issued rwts: total=1278,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:44.447 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:44.447 00:20:44.447 Run status group 0 (all jobs): 00:20:44.447 READ: bw=95.8MiB/s (100MB/s), 31.9MiB/s-31.9MiB/s (33.5MB/s-33.5MB/s), io=479MiB (503MB), run=5001-5003msec 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:20:44.447 12:45:09 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:44.447 bdev_null0 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:44.447 [2024-07-12 12:45:09.708328] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:44.447 bdev_null1 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:44.447 bdev_null2 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:44.447 { 00:20:44.447 "params": { 00:20:44.447 "name": "Nvme$subsystem", 00:20:44.447 "trtype": "$TEST_TRANSPORT", 00:20:44.447 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:20:44.447 "adrfam": "ipv4", 00:20:44.447 "trsvcid": "$NVMF_PORT", 00:20:44.447 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.447 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.447 "hdgst": ${hdgst:-false}, 00:20:44.447 "ddgst": ${ddgst:-false} 00:20:44.447 }, 00:20:44.447 "method": "bdev_nvme_attach_controller" 00:20:44.447 } 00:20:44.447 EOF 00:20:44.447 )") 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:44.447 12:45:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:44.447 { 00:20:44.447 "params": { 00:20:44.447 "name": "Nvme$subsystem", 00:20:44.447 "trtype": "$TEST_TRANSPORT", 00:20:44.447 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.447 "adrfam": "ipv4", 00:20:44.447 "trsvcid": "$NVMF_PORT", 00:20:44.447 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.447 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.448 "hdgst": ${hdgst:-false}, 00:20:44.448 "ddgst": ${ddgst:-false} 00:20:44.448 }, 00:20:44.448 "method": "bdev_nvme_attach_controller" 00:20:44.448 } 00:20:44.448 EOF 00:20:44.448 )") 00:20:44.448 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:44.448 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:44.448 12:45:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:44.448 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:44.448 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:44.448 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:44.448 12:45:09 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:44.448 12:45:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:44.448 { 00:20:44.448 "params": { 00:20:44.448 "name": "Nvme$subsystem", 00:20:44.448 "trtype": "$TEST_TRANSPORT", 00:20:44.448 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.448 "adrfam": "ipv4", 00:20:44.448 "trsvcid": "$NVMF_PORT", 00:20:44.448 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.448 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.448 "hdgst": ${hdgst:-false}, 00:20:44.448 "ddgst": ${ddgst:-false} 00:20:44.448 }, 00:20:44.448 "method": "bdev_nvme_attach_controller" 00:20:44.448 } 00:20:44.448 EOF 00:20:44.448 )") 00:20:44.448 12:45:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:44.448 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:44.448 12:45:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:44.448 12:45:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:20:44.448 12:45:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:44.448 12:45:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:44.448 "params": { 00:20:44.448 "name": "Nvme0", 00:20:44.448 "trtype": "tcp", 00:20:44.448 "traddr": "10.0.0.2", 00:20:44.448 "adrfam": "ipv4", 00:20:44.448 "trsvcid": "4420", 00:20:44.448 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:44.448 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:44.448 "hdgst": false, 00:20:44.448 "ddgst": false 00:20:44.448 }, 00:20:44.448 "method": "bdev_nvme_attach_controller" 00:20:44.448 },{ 00:20:44.448 "params": { 00:20:44.448 "name": "Nvme1", 00:20:44.448 "trtype": "tcp", 00:20:44.448 "traddr": "10.0.0.2", 00:20:44.448 "adrfam": "ipv4", 00:20:44.448 "trsvcid": "4420", 00:20:44.448 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.448 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:44.448 "hdgst": false, 00:20:44.448 "ddgst": false 00:20:44.448 }, 00:20:44.448 "method": "bdev_nvme_attach_controller" 00:20:44.448 },{ 00:20:44.448 "params": { 00:20:44.448 "name": "Nvme2", 00:20:44.448 "trtype": "tcp", 00:20:44.448 "traddr": "10.0.0.2", 00:20:44.448 "adrfam": "ipv4", 00:20:44.448 "trsvcid": "4420", 00:20:44.448 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:44.448 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:44.448 "hdgst": false, 00:20:44.448 "ddgst": false 00:20:44.448 }, 00:20:44.448 "method": "bdev_nvme_attach_controller" 00:20:44.448 }' 00:20:44.448 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:44.448 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:44.448 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:44.448 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:44.448 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:44.448 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:44.448 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:44.448 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:44.448 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:44.448 12:45:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:44.448 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:44.448 ... 00:20:44.448 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:44.448 ... 00:20:44.448 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:44.448 ... 00:20:44.448 fio-3.35 00:20:44.448 Starting 24 threads 00:20:56.645 00:20:56.645 filename0: (groupid=0, jobs=1): err= 0: pid=83782: Fri Jul 12 12:45:20 2024 00:20:56.645 read: IOPS=192, BW=768KiB/s (787kB/s)(7708KiB/10032msec) 00:20:56.645 slat (usec): min=5, max=8071, avg=28.98, stdev=268.81 00:20:56.645 clat (msec): min=39, max=167, avg=83.03, stdev=21.76 00:20:56.645 lat (msec): min=39, max=167, avg=83.06, stdev=21.76 00:20:56.645 clat percentiles (msec): 00:20:56.645 | 1.00th=[ 43], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 63], 00:20:56.645 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 83], 60.00th=[ 94], 00:20:56.645 | 70.00th=[ 102], 80.00th=[ 106], 90.00th=[ 109], 95.00th=[ 111], 00:20:56.645 | 99.00th=[ 125], 99.50th=[ 125], 99.90th=[ 167], 99.95th=[ 167], 00:20:56.645 | 99.99th=[ 167] 00:20:56.645 bw ( KiB/s): min= 512, max= 968, per=4.03%, avg=766.80, stdev=162.47, samples=20 00:20:56.645 iops : min= 128, max= 242, avg=191.70, stdev=40.62, samples=20 00:20:56.646 lat (msec) : 50=10.79%, 100=58.69%, 250=30.51% 00:20:56.646 cpu : usr=42.97%, sys=2.98%, ctx=1232, majf=0, minf=9 00:20:56.646 IO depths : 1=0.1%, 2=3.0%, 4=11.8%, 8=70.8%, 16=14.3%, 32=0.0%, >=64=0.0% 00:20:56.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.646 complete : 0=0.0%, 4=90.3%, 8=7.1%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.646 issued rwts: total=1927,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.646 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.646 filename0: (groupid=0, jobs=1): err= 0: pid=83783: Fri Jul 12 12:45:20 2024 00:20:56.646 read: IOPS=202, BW=810KiB/s (830kB/s)(8140KiB/10045msec) 00:20:56.646 slat (usec): min=7, max=12022, avg=29.38, stdev=376.42 00:20:56.646 clat (msec): min=12, max=155, avg=78.76, stdev=21.04 00:20:56.646 lat (msec): min=12, max=155, avg=78.79, stdev=21.03 00:20:56.646 clat percentiles (msec): 00:20:56.646 | 1.00th=[ 31], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 61], 00:20:56.646 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 81], 60.00th=[ 84], 00:20:56.646 | 70.00th=[ 96], 80.00th=[ 97], 90.00th=[ 108], 95.00th=[ 109], 00:20:56.646 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 144], 00:20:56.646 | 99.99th=[ 157] 00:20:56.646 bw ( KiB/s): min= 640, max= 1016, per=4.25%, avg=807.60, stdev=111.11, samples=20 00:20:56.646 iops : min= 160, max= 254, avg=201.90, stdev=27.78, samples=20 00:20:56.646 lat (msec) : 20=0.88%, 50=10.02%, 100=72.24%, 250=16.86% 00:20:56.646 cpu : usr=32.46%, sys=1.91%, ctx=893, majf=0, minf=9 00:20:56.646 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=82.5%, 16=16.8%, 32=0.0%, >=64=0.0% 00:20:56.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.646 complete : 0=0.0%, 4=87.8%, 8=12.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.646 issued rwts: total=2035,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.646 
latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.646 filename0: (groupid=0, jobs=1): err= 0: pid=83784: Fri Jul 12 12:45:20 2024 00:20:56.646 read: IOPS=203, BW=813KiB/s (833kB/s)(8156KiB/10027msec) 00:20:56.646 slat (usec): min=4, max=4024, avg=17.22, stdev=88.91 00:20:56.646 clat (msec): min=35, max=167, avg=78.52, stdev=22.67 00:20:56.646 lat (msec): min=35, max=167, avg=78.54, stdev=22.67 00:20:56.646 clat percentiles (msec): 00:20:56.646 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 57], 00:20:56.646 | 30.00th=[ 66], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 84], 00:20:56.646 | 70.00th=[ 96], 80.00th=[ 97], 90.00th=[ 108], 95.00th=[ 109], 00:20:56.646 | 99.00th=[ 132], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 167], 00:20:56.646 | 99.99th=[ 167] 00:20:56.646 bw ( KiB/s): min= 496, max= 1024, per=4.27%, avg=811.60, stdev=153.17, samples=20 00:20:56.646 iops : min= 124, max= 256, avg=202.90, stdev=38.29, samples=20 00:20:56.646 lat (msec) : 50=16.38%, 100=66.60%, 250=17.02% 00:20:56.646 cpu : usr=33.56%, sys=2.29%, ctx=946, majf=0, minf=9 00:20:56.646 IO depths : 1=0.1%, 2=0.8%, 4=3.1%, 8=80.5%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:56.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.646 complete : 0=0.0%, 4=87.8%, 8=11.5%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.646 issued rwts: total=2039,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.646 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.646 filename0: (groupid=0, jobs=1): err= 0: pid=83785: Fri Jul 12 12:45:20 2024 00:20:56.646 read: IOPS=194, BW=778KiB/s (796kB/s)(7788KiB/10013msec) 00:20:56.646 slat (usec): min=3, max=8028, avg=38.06, stdev=422.85 00:20:56.646 clat (msec): min=15, max=230, avg=82.05, stdev=26.59 00:20:56.646 lat (msec): min=15, max=230, avg=82.08, stdev=26.60 00:20:56.646 clat percentiles (msec): 00:20:56.646 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 58], 00:20:56.646 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 84], 60.00th=[ 89], 00:20:56.646 | 70.00th=[ 96], 80.00th=[ 107], 90.00th=[ 109], 95.00th=[ 121], 00:20:56.646 | 99.00th=[ 155], 99.50th=[ 209], 99.90th=[ 232], 99.95th=[ 232], 00:20:56.646 | 99.99th=[ 232] 00:20:56.646 bw ( KiB/s): min= 496, max= 1024, per=4.08%, avg=774.65, stdev=190.37, samples=20 00:20:56.646 iops : min= 124, max= 256, avg=193.65, stdev=47.60, samples=20 00:20:56.646 lat (msec) : 20=0.31%, 50=15.36%, 100=61.12%, 250=23.22% 00:20:56.646 cpu : usr=32.62%, sys=1.86%, ctx=880, majf=0, minf=9 00:20:56.646 IO depths : 1=0.1%, 2=2.6%, 4=10.2%, 8=72.7%, 16=14.4%, 32=0.0%, >=64=0.0% 00:20:56.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.646 complete : 0=0.0%, 4=89.8%, 8=8.0%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.646 issued rwts: total=1947,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.646 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.646 filename0: (groupid=0, jobs=1): err= 0: pid=83786: Fri Jul 12 12:45:20 2024 00:20:56.646 read: IOPS=204, BW=820KiB/s (840kB/s)(8236KiB/10044msec) 00:20:56.646 slat (usec): min=7, max=4023, avg=15.95, stdev=88.46 00:20:56.646 clat (msec): min=12, max=144, avg=77.90, stdev=21.62 00:20:56.646 lat (msec): min=12, max=144, avg=77.92, stdev=21.62 00:20:56.646 clat percentiles (msec): 00:20:56.646 | 1.00th=[ 31], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 58], 00:20:56.646 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 84], 00:20:56.646 | 70.00th=[ 93], 80.00th=[ 99], 90.00th=[ 107], 95.00th=[ 110], 
00:20:56.646 | 99.00th=[ 121], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:20:56.646 | 99.99th=[ 144] 00:20:56.646 bw ( KiB/s): min= 640, max= 968, per=4.30%, avg=817.20, stdev=110.07, samples=20 00:20:56.646 iops : min= 160, max= 242, avg=204.30, stdev=27.52, samples=20 00:20:56.646 lat (msec) : 20=0.78%, 50=13.65%, 100=67.95%, 250=17.63% 00:20:56.646 cpu : usr=34.91%, sys=1.96%, ctx=935, majf=0, minf=9 00:20:56.646 IO depths : 1=0.1%, 2=0.7%, 4=2.7%, 8=80.6%, 16=16.0%, 32=0.0%, >=64=0.0% 00:20:56.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.646 complete : 0=0.0%, 4=87.9%, 8=11.5%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.646 issued rwts: total=2059,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.646 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.646 filename0: (groupid=0, jobs=1): err= 0: pid=83787: Fri Jul 12 12:45:20 2024 00:20:56.646 read: IOPS=202, BW=810KiB/s (829kB/s)(8140KiB/10051msec) 00:20:56.646 slat (usec): min=4, max=8029, avg=28.06, stdev=321.11 00:20:56.646 clat (msec): min=3, max=144, avg=78.80, stdev=22.86 00:20:56.646 lat (msec): min=3, max=144, avg=78.83, stdev=22.86 00:20:56.646 clat percentiles (msec): 00:20:56.646 | 1.00th=[ 11], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 60], 00:20:56.646 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 85], 00:20:56.646 | 70.00th=[ 96], 80.00th=[ 99], 90.00th=[ 108], 95.00th=[ 109], 00:20:56.646 | 99.00th=[ 132], 99.50th=[ 146], 99.90th=[ 146], 99.95th=[ 146], 00:20:56.646 | 99.99th=[ 146] 00:20:56.646 bw ( KiB/s): min= 640, max= 1000, per=4.24%, avg=806.95, stdev=112.79, samples=20 00:20:56.646 iops : min= 160, max= 250, avg=201.70, stdev=28.23, samples=20 00:20:56.646 lat (msec) : 4=0.69%, 20=0.88%, 50=12.24%, 100=67.91%, 250=18.28% 00:20:56.646 cpu : usr=32.46%, sys=2.04%, ctx=879, majf=0, minf=9 00:20:56.646 IO depths : 1=0.1%, 2=0.8%, 4=3.3%, 8=79.8%, 16=16.1%, 32=0.0%, >=64=0.0% 00:20:56.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.646 complete : 0=0.0%, 4=88.2%, 8=11.0%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.646 issued rwts: total=2035,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.646 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.646 filename0: (groupid=0, jobs=1): err= 0: pid=83788: Fri Jul 12 12:45:20 2024 00:20:56.646 read: IOPS=188, BW=755KiB/s (774kB/s)(7580KiB/10034msec) 00:20:56.646 slat (nsec): min=4067, max=34513, avg=13883.82, stdev=4507.01 00:20:56.646 clat (msec): min=34, max=182, avg=84.58, stdev=24.57 00:20:56.646 lat (msec): min=34, max=182, avg=84.59, stdev=24.57 00:20:56.646 clat percentiles (msec): 00:20:56.646 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 63], 00:20:56.646 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 84], 60.00th=[ 93], 00:20:56.646 | 70.00th=[ 97], 80.00th=[ 108], 90.00th=[ 109], 95.00th=[ 121], 00:20:56.647 | 99.00th=[ 146], 99.50th=[ 171], 99.90th=[ 184], 99.95th=[ 184], 00:20:56.647 | 99.99th=[ 184] 00:20:56.647 bw ( KiB/s): min= 496, max= 1024, per=3.95%, avg=751.30, stdev=166.69, samples=20 00:20:56.647 iops : min= 124, max= 256, avg=187.80, stdev=41.69, samples=20 00:20:56.647 lat (msec) : 50=11.19%, 100=62.16%, 250=26.65% 00:20:56.647 cpu : usr=31.52%, sys=2.10%, ctx=970, majf=0, minf=9 00:20:56.647 IO depths : 1=0.1%, 2=1.9%, 4=7.5%, 8=75.0%, 16=15.5%, 32=0.0%, >=64=0.0% 00:20:56.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.647 complete : 0=0.0%, 4=89.6%, 8=8.7%, 16=1.6%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:20:56.647 issued rwts: total=1895,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.647 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.647 filename0: (groupid=0, jobs=1): err= 0: pid=83789: Fri Jul 12 12:45:20 2024 00:20:56.647 read: IOPS=198, BW=795KiB/s (814kB/s)(7952KiB/10007msec) 00:20:56.647 slat (usec): min=4, max=8027, avg=22.35, stdev=254.12 00:20:56.647 clat (msec): min=13, max=199, avg=80.39, stdev=25.07 00:20:56.647 lat (msec): min=13, max=199, avg=80.41, stdev=25.07 00:20:56.647 clat percentiles (msec): 00:20:56.647 | 1.00th=[ 30], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 57], 00:20:56.647 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 81], 60.00th=[ 86], 00:20:56.647 | 70.00th=[ 96], 80.00th=[ 106], 90.00th=[ 108], 95.00th=[ 121], 00:20:56.647 | 99.00th=[ 148], 99.50th=[ 148], 99.90th=[ 199], 99.95th=[ 201], 00:20:56.647 | 99.99th=[ 201] 00:20:56.647 bw ( KiB/s): min= 512, max= 1048, per=4.10%, avg=778.11, stdev=185.65, samples=19 00:20:56.647 iops : min= 128, max= 262, avg=194.53, stdev=46.41, samples=19 00:20:56.647 lat (msec) : 20=0.30%, 50=17.66%, 100=58.95%, 250=23.09% 00:20:56.647 cpu : usr=33.33%, sys=2.08%, ctx=1076, majf=0, minf=9 00:20:56.647 IO depths : 1=0.1%, 2=2.0%, 4=8.0%, 8=75.2%, 16=14.8%, 32=0.0%, >=64=0.0% 00:20:56.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.647 complete : 0=0.0%, 4=89.1%, 8=9.1%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.647 issued rwts: total=1988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.647 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.647 filename1: (groupid=0, jobs=1): err= 0: pid=83790: Fri Jul 12 12:45:20 2024 00:20:56.647 read: IOPS=192, BW=770KiB/s (789kB/s)(7744KiB/10052msec) 00:20:56.647 slat (usec): min=4, max=8023, avg=21.41, stdev=257.42 00:20:56.647 clat (usec): min=901, max=155949, avg=82927.07, stdev=26520.86 00:20:56.647 lat (usec): min=909, max=155967, avg=82948.48, stdev=26516.99 00:20:56.647 clat percentiles (msec): 00:20:56.647 | 1.00th=[ 4], 5.00th=[ 46], 10.00th=[ 50], 20.00th=[ 65], 00:20:56.647 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 84], 60.00th=[ 96], 00:20:56.647 | 70.00th=[ 100], 80.00th=[ 107], 90.00th=[ 108], 95.00th=[ 117], 00:20:56.647 | 99.00th=[ 144], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 157], 00:20:56.647 | 99.99th=[ 157] 00:20:56.647 bw ( KiB/s): min= 524, max= 1253, per=4.04%, avg=767.25, stdev=178.69, samples=20 00:20:56.647 iops : min= 131, max= 313, avg=191.80, stdev=44.64, samples=20 00:20:56.647 lat (usec) : 1000=0.10% 00:20:56.647 lat (msec) : 4=1.45%, 10=1.76%, 20=0.72%, 50=7.02%, 100=60.33% 00:20:56.647 lat (msec) : 250=28.62% 00:20:56.647 cpu : usr=35.37%, sys=2.21%, ctx=1189, majf=0, minf=9 00:20:56.647 IO depths : 1=0.1%, 2=2.4%, 4=9.4%, 8=72.6%, 16=15.5%, 32=0.0%, >=64=0.0% 00:20:56.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.647 complete : 0=0.0%, 4=90.5%, 8=7.5%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.647 issued rwts: total=1936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.647 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.647 filename1: (groupid=0, jobs=1): err= 0: pid=83791: Fri Jul 12 12:45:20 2024 00:20:56.647 read: IOPS=204, BW=818KiB/s (837kB/s)(8192KiB/10019msec) 00:20:56.647 slat (usec): min=8, max=8024, avg=32.23, stdev=334.59 00:20:56.647 clat (msec): min=26, max=159, avg=78.11, stdev=21.86 00:20:56.647 lat (msec): min=26, max=159, avg=78.14, stdev=21.87 00:20:56.647 clat percentiles (msec): 00:20:56.647 | 
1.00th=[ 40], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 56], 00:20:56.647 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 78], 60.00th=[ 83], 00:20:56.647 | 70.00th=[ 92], 80.00th=[ 100], 90.00th=[ 107], 95.00th=[ 111], 00:20:56.647 | 99.00th=[ 132], 99.50th=[ 148], 99.90th=[ 148], 99.95th=[ 161], 00:20:56.647 | 99.99th=[ 161] 00:20:56.647 bw ( KiB/s): min= 512, max= 1000, per=4.29%, avg=814.00, stdev=146.11, samples=20 00:20:56.647 iops : min= 128, max= 250, avg=203.50, stdev=36.53, samples=20 00:20:56.647 lat (msec) : 50=13.48%, 100=68.31%, 250=18.21% 00:20:56.647 cpu : usr=41.57%, sys=2.79%, ctx=1430, majf=0, minf=9 00:20:56.647 IO depths : 1=0.1%, 2=0.9%, 4=3.7%, 8=79.9%, 16=15.5%, 32=0.0%, >=64=0.0% 00:20:56.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.647 complete : 0=0.0%, 4=87.9%, 8=11.3%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.647 issued rwts: total=2048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.647 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.647 filename1: (groupid=0, jobs=1): err= 0: pid=83792: Fri Jul 12 12:45:20 2024 00:20:56.647 read: IOPS=196, BW=787KiB/s (805kB/s)(7900KiB/10044msec) 00:20:56.647 slat (usec): min=8, max=8032, avg=20.10, stdev=201.71 00:20:56.647 clat (msec): min=24, max=154, avg=81.17, stdev=21.30 00:20:56.647 lat (msec): min=24, max=154, avg=81.19, stdev=21.30 00:20:56.647 clat percentiles (msec): 00:20:56.647 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 61], 00:20:56.647 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 83], 60.00th=[ 85], 00:20:56.647 | 70.00th=[ 96], 80.00th=[ 101], 90.00th=[ 108], 95.00th=[ 112], 00:20:56.647 | 99.00th=[ 130], 99.50th=[ 144], 99.90th=[ 155], 99.95th=[ 155], 00:20:56.647 | 99.99th=[ 155] 00:20:56.647 bw ( KiB/s): min= 592, max= 976, per=4.12%, avg=783.60, stdev=124.65, samples=20 00:20:56.647 iops : min= 148, max= 244, avg=195.90, stdev=31.16, samples=20 00:20:56.647 lat (msec) : 50=10.99%, 100=69.37%, 250=19.65% 00:20:56.647 cpu : usr=36.20%, sys=2.59%, ctx=1102, majf=0, minf=9 00:20:56.647 IO depths : 1=0.1%, 2=1.3%, 4=5.0%, 8=77.8%, 16=15.8%, 32=0.0%, >=64=0.0% 00:20:56.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.647 complete : 0=0.0%, 4=88.8%, 8=10.1%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.647 issued rwts: total=1975,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.647 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.647 filename1: (groupid=0, jobs=1): err= 0: pid=83793: Fri Jul 12 12:45:20 2024 00:20:56.647 read: IOPS=200, BW=803KiB/s (823kB/s)(8044KiB/10012msec) 00:20:56.647 slat (usec): min=3, max=4026, avg=17.56, stdev=112.17 00:20:56.647 clat (msec): min=18, max=155, avg=79.54, stdev=25.48 00:20:56.647 lat (msec): min=19, max=155, avg=79.56, stdev=25.48 00:20:56.647 clat percentiles (msec): 00:20:56.647 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 54], 00:20:56.647 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 84], 00:20:56.647 | 70.00th=[ 96], 80.00th=[ 105], 90.00th=[ 112], 95.00th=[ 121], 00:20:56.647 | 99.00th=[ 144], 99.50th=[ 148], 99.90th=[ 155], 99.95th=[ 157], 00:20:56.647 | 99.99th=[ 157] 00:20:56.647 bw ( KiB/s): min= 496, max= 1024, per=4.21%, avg=800.65, stdev=184.89, samples=20 00:20:56.647 iops : min= 124, max= 256, avg=200.15, stdev=46.23, samples=20 00:20:56.647 lat (msec) : 20=0.30%, 50=15.91%, 100=61.46%, 250=22.33% 00:20:56.647 cpu : usr=37.22%, sys=2.21%, ctx=1156, majf=0, minf=9 00:20:56.647 IO depths : 1=0.1%, 2=1.2%, 4=4.3%, 8=79.1%, 
16=15.3%, 32=0.0%, >=64=0.0% 00:20:56.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.647 complete : 0=0.0%, 4=88.1%, 8=11.0%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.647 issued rwts: total=2011,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.647 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.647 filename1: (groupid=0, jobs=1): err= 0: pid=83794: Fri Jul 12 12:45:20 2024 00:20:56.647 read: IOPS=201, BW=807KiB/s (827kB/s)(8108KiB/10043msec) 00:20:56.647 slat (usec): min=3, max=8025, avg=33.77, stdev=377.03 00:20:56.647 clat (msec): min=15, max=152, avg=79.05, stdev=22.74 00:20:56.647 lat (msec): min=15, max=152, avg=79.08, stdev=22.75 00:20:56.647 clat percentiles (msec): 00:20:56.647 | 1.00th=[ 32], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 58], 00:20:56.647 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 83], 00:20:56.647 | 70.00th=[ 95], 80.00th=[ 101], 90.00th=[ 108], 95.00th=[ 111], 00:20:56.647 | 99.00th=[ 144], 99.50th=[ 153], 99.90th=[ 153], 99.95th=[ 153], 00:20:56.647 | 99.99th=[ 153] 00:20:56.647 bw ( KiB/s): min= 528, max= 976, per=4.23%, avg=804.45, stdev=118.02, samples=20 00:20:56.647 iops : min= 132, max= 244, avg=201.10, stdev=29.50, samples=20 00:20:56.647 lat (msec) : 20=0.69%, 50=12.93%, 100=66.45%, 250=19.93% 00:20:56.647 cpu : usr=37.07%, sys=2.06%, ctx=1067, majf=0, minf=9 00:20:56.647 IO depths : 1=0.1%, 2=0.8%, 4=3.3%, 8=79.9%, 16=15.9%, 32=0.0%, >=64=0.0% 00:20:56.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.647 complete : 0=0.0%, 4=88.1%, 8=11.1%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.647 issued rwts: total=2027,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.647 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.647 filename1: (groupid=0, jobs=1): err= 0: pid=83795: Fri Jul 12 12:45:20 2024 00:20:56.647 read: IOPS=207, BW=830KiB/s (850kB/s)(8308KiB/10004msec) 00:20:56.647 slat (usec): min=4, max=8030, avg=18.29, stdev=175.96 00:20:56.647 clat (usec): min=1769, max=216237, avg=76968.57, stdev=25804.88 00:20:56.647 lat (usec): min=1777, max=216250, avg=76986.85, stdev=25803.86 00:20:56.647 clat percentiles (msec): 00:20:56.648 | 1.00th=[ 7], 5.00th=[ 42], 10.00th=[ 48], 20.00th=[ 54], 00:20:56.648 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 83], 00:20:56.648 | 70.00th=[ 93], 80.00th=[ 100], 90.00th=[ 108], 95.00th=[ 110], 00:20:56.648 | 99.00th=[ 132], 99.50th=[ 192], 99.90th=[ 192], 99.95th=[ 218], 00:20:56.648 | 99.99th=[ 218] 00:20:56.648 bw ( KiB/s): min= 400, max= 1000, per=4.23%, avg=804.21, stdev=163.37, samples=19 00:20:56.648 iops : min= 100, max= 250, avg=201.05, stdev=40.84, samples=19 00:20:56.648 lat (msec) : 2=0.14%, 4=0.34%, 10=0.77%, 20=0.63%, 50=14.68% 00:20:56.648 lat (msec) : 100=64.66%, 250=18.78% 00:20:56.648 cpu : usr=39.92%, sys=2.30%, ctx=1320, majf=0, minf=9 00:20:56.648 IO depths : 1=0.1%, 2=1.2%, 4=4.4%, 8=79.2%, 16=15.2%, 32=0.0%, >=64=0.0% 00:20:56.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.648 complete : 0=0.0%, 4=88.0%, 8=11.0%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.648 issued rwts: total=2077,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.648 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.648 filename1: (groupid=0, jobs=1): err= 0: pid=83796: Fri Jul 12 12:45:20 2024 00:20:56.648 read: IOPS=207, BW=829KiB/s (849kB/s)(8304KiB/10015msec) 00:20:56.648 slat (usec): min=4, max=8028, avg=19.67, stdev=196.70 00:20:56.648 clat (msec): 
min=19, max=232, avg=77.05, stdev=25.13 00:20:56.648 lat (msec): min=19, max=232, avg=77.07, stdev=25.13 00:20:56.648 clat percentiles (msec): 00:20:56.648 | 1.00th=[ 35], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 53], 00:20:56.648 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 82], 00:20:56.648 | 70.00th=[ 88], 80.00th=[ 97], 90.00th=[ 108], 95.00th=[ 110], 00:20:56.648 | 99.00th=[ 142], 99.50th=[ 211], 99.90th=[ 211], 99.95th=[ 232], 00:20:56.648 | 99.99th=[ 232] 00:20:56.648 bw ( KiB/s): min= 496, max= 1056, per=4.35%, avg=826.45, stdev=157.95, samples=20 00:20:56.648 iops : min= 124, max= 264, avg=206.60, stdev=39.49, samples=20 00:20:56.648 lat (msec) : 20=0.29%, 50=17.87%, 100=64.31%, 250=17.53% 00:20:56.648 cpu : usr=34.10%, sys=2.14%, ctx=994, majf=0, minf=9 00:20:56.648 IO depths : 1=0.1%, 2=0.6%, 4=2.3%, 8=81.6%, 16=15.5%, 32=0.0%, >=64=0.0% 00:20:56.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.648 complete : 0=0.0%, 4=87.4%, 8=12.1%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.648 issued rwts: total=2076,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.648 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.648 filename1: (groupid=0, jobs=1): err= 0: pid=83797: Fri Jul 12 12:45:20 2024 00:20:56.648 read: IOPS=190, BW=763KiB/s (781kB/s)(7652KiB/10028msec) 00:20:56.648 slat (usec): min=3, max=4026, avg=22.37, stdev=169.45 00:20:56.648 clat (msec): min=36, max=168, avg=83.67, stdev=23.78 00:20:56.648 lat (msec): min=36, max=168, avg=83.70, stdev=23.78 00:20:56.648 clat percentiles (msec): 00:20:56.648 | 1.00th=[ 41], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 60], 00:20:56.648 | 30.00th=[ 71], 40.00th=[ 77], 50.00th=[ 84], 60.00th=[ 96], 00:20:56.648 | 70.00th=[ 97], 80.00th=[ 106], 90.00th=[ 110], 95.00th=[ 120], 00:20:56.648 | 99.00th=[ 148], 99.50th=[ 157], 99.90th=[ 169], 99.95th=[ 169], 00:20:56.648 | 99.99th=[ 169] 00:20:56.648 bw ( KiB/s): min= 496, max= 976, per=4.01%, avg=761.20, stdev=177.30, samples=20 00:20:56.648 iops : min= 124, max= 244, avg=190.30, stdev=44.33, samples=20 00:20:56.648 lat (msec) : 50=11.24%, 100=61.27%, 250=27.50% 00:20:56.648 cpu : usr=39.41%, sys=2.83%, ctx=1275, majf=0, minf=9 00:20:56.648 IO depths : 1=0.1%, 2=2.8%, 4=11.2%, 8=71.5%, 16=14.4%, 32=0.0%, >=64=0.0% 00:20:56.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.648 complete : 0=0.0%, 4=90.2%, 8=7.4%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.648 issued rwts: total=1913,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.648 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.648 filename2: (groupid=0, jobs=1): err= 0: pid=83798: Fri Jul 12 12:45:20 2024 00:20:56.648 read: IOPS=213, BW=853KiB/s (874kB/s)(8532KiB/10002msec) 00:20:56.648 slat (usec): min=3, max=8028, avg=24.31, stdev=252.37 00:20:56.648 clat (msec): min=2, max=215, avg=74.90, stdev=24.51 00:20:56.648 lat (msec): min=2, max=215, avg=74.93, stdev=24.50 00:20:56.648 clat percentiles (msec): 00:20:56.648 | 1.00th=[ 12], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 52], 00:20:56.648 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 74], 60.00th=[ 81], 00:20:56.648 | 70.00th=[ 88], 80.00th=[ 97], 90.00th=[ 105], 95.00th=[ 108], 00:20:56.648 | 99.00th=[ 122], 99.50th=[ 192], 99.90th=[ 192], 99.95th=[ 215], 00:20:56.648 | 99.99th=[ 215] 00:20:56.648 bw ( KiB/s): min= 512, max= 1072, per=4.37%, avg=829.47, stdev=146.11, samples=19 00:20:56.648 iops : min= 128, max= 268, avg=207.37, stdev=36.53, samples=19 00:20:56.648 lat (msec) : 4=0.33%, 
10=0.56%, 20=0.61%, 50=14.86%, 100=68.26% 00:20:56.648 lat (msec) : 250=15.38% 00:20:56.648 cpu : usr=39.69%, sys=2.84%, ctx=1380, majf=0, minf=9 00:20:56.648 IO depths : 1=0.1%, 2=0.6%, 4=2.4%, 8=81.6%, 16=15.3%, 32=0.0%, >=64=0.0% 00:20:56.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.648 complete : 0=0.0%, 4=87.2%, 8=12.2%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.648 issued rwts: total=2133,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.648 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.648 filename2: (groupid=0, jobs=1): err= 0: pid=83799: Fri Jul 12 12:45:20 2024 00:20:56.648 read: IOPS=191, BW=765KiB/s (784kB/s)(7672KiB/10024msec) 00:20:56.648 slat (usec): min=3, max=8029, avg=23.14, stdev=258.72 00:20:56.648 clat (msec): min=31, max=167, avg=83.41, stdev=22.86 00:20:56.648 lat (msec): min=31, max=167, avg=83.43, stdev=22.87 00:20:56.648 clat percentiles (msec): 00:20:56.648 | 1.00th=[ 41], 5.00th=[ 47], 10.00th=[ 51], 20.00th=[ 63], 00:20:56.648 | 30.00th=[ 71], 40.00th=[ 77], 50.00th=[ 85], 60.00th=[ 92], 00:20:56.648 | 70.00th=[ 97], 80.00th=[ 106], 90.00th=[ 110], 95.00th=[ 112], 00:20:56.648 | 99.00th=[ 142], 99.50th=[ 157], 99.90th=[ 167], 99.95th=[ 167], 00:20:56.648 | 99.99th=[ 167] 00:20:56.648 bw ( KiB/s): min= 510, max= 976, per=4.00%, avg=760.70, stdev=163.27, samples=20 00:20:56.648 iops : min= 127, max= 244, avg=190.15, stdev=40.86, samples=20 00:20:56.648 lat (msec) : 50=10.64%, 100=63.24%, 250=26.12% 00:20:56.648 cpu : usr=38.28%, sys=2.44%, ctx=1448, majf=0, minf=9 00:20:56.648 IO depths : 1=0.1%, 2=2.7%, 4=10.8%, 8=71.7%, 16=14.7%, 32=0.0%, >=64=0.0% 00:20:56.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.648 complete : 0=0.0%, 4=90.3%, 8=7.4%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.648 issued rwts: total=1918,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.648 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.648 filename2: (groupid=0, jobs=1): err= 0: pid=83800: Fri Jul 12 12:45:20 2024 00:20:56.648 read: IOPS=185, BW=743KiB/s (760kB/s)(7456KiB/10041msec) 00:20:56.648 slat (usec): min=4, max=4023, avg=16.50, stdev=92.98 00:20:56.648 clat (msec): min=25, max=156, avg=86.00, stdev=23.62 00:20:56.648 lat (msec): min=25, max=156, avg=86.02, stdev=23.62 00:20:56.648 clat percentiles (msec): 00:20:56.648 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 52], 20.00th=[ 69], 00:20:56.648 | 30.00th=[ 73], 40.00th=[ 81], 50.00th=[ 86], 60.00th=[ 96], 00:20:56.648 | 70.00th=[ 99], 80.00th=[ 108], 90.00th=[ 112], 95.00th=[ 121], 00:20:56.648 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 157], 00:20:56.648 | 99.99th=[ 157] 00:20:56.648 bw ( KiB/s): min= 512, max= 976, per=3.89%, avg=739.20, stdev=153.67, samples=20 00:20:56.648 iops : min= 128, max= 244, avg=184.80, stdev=38.42, samples=20 00:20:56.648 lat (msec) : 50=9.17%, 100=61.16%, 250=29.67% 00:20:56.648 cpu : usr=36.49%, sys=2.43%, ctx=1044, majf=0, minf=9 00:20:56.648 IO depths : 1=0.1%, 2=3.2%, 4=12.6%, 8=69.7%, 16=14.4%, 32=0.0%, >=64=0.0% 00:20:56.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.648 complete : 0=0.0%, 4=90.8%, 8=6.4%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.648 issued rwts: total=1864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.648 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.648 filename2: (groupid=0, jobs=1): err= 0: pid=83801: Fri Jul 12 12:45:20 2024 00:20:56.648 read: IOPS=202, BW=810KiB/s 
(829kB/s)(8116KiB/10025msec) 00:20:56.648 slat (usec): min=3, max=8019, avg=20.77, stdev=198.81 00:20:56.648 clat (msec): min=35, max=160, avg=78.87, stdev=21.77 00:20:56.648 lat (msec): min=35, max=160, avg=78.89, stdev=21.77 00:20:56.648 clat percentiles (msec): 00:20:56.648 | 1.00th=[ 42], 5.00th=[ 47], 10.00th=[ 49], 20.00th=[ 56], 00:20:56.648 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 78], 60.00th=[ 83], 00:20:56.648 | 70.00th=[ 94], 80.00th=[ 101], 90.00th=[ 108], 95.00th=[ 111], 00:20:56.648 | 99.00th=[ 131], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 161], 00:20:56.648 | 99.99th=[ 161] 00:20:56.648 bw ( KiB/s): min= 544, max= 1048, per=4.25%, avg=807.70, stdev=144.17, samples=20 00:20:56.648 iops : min= 136, max= 262, avg=201.90, stdev=36.06, samples=20 00:20:56.648 lat (msec) : 50=12.62%, 100=67.77%, 250=19.62% 00:20:56.648 cpu : usr=41.50%, sys=2.74%, ctx=1204, majf=0, minf=9 00:20:56.648 IO depths : 1=0.1%, 2=1.0%, 4=3.8%, 8=79.6%, 16=15.5%, 32=0.0%, >=64=0.0% 00:20:56.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.648 complete : 0=0.0%, 4=88.0%, 8=11.1%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.648 issued rwts: total=2029,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.648 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.648 filename2: (groupid=0, jobs=1): err= 0: pid=83802: Fri Jul 12 12:45:20 2024 00:20:56.648 read: IOPS=201, BW=804KiB/s (824kB/s)(8084KiB/10049msec) 00:20:56.648 slat (usec): min=4, max=8025, avg=25.74, stdev=281.64 00:20:56.648 clat (msec): min=15, max=143, avg=79.35, stdev=21.71 00:20:56.648 lat (msec): min=15, max=143, avg=79.38, stdev=21.72 00:20:56.648 clat percentiles (msec): 00:20:56.648 | 1.00th=[ 31], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 61], 00:20:56.648 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 85], 00:20:56.648 | 70.00th=[ 96], 80.00th=[ 100], 90.00th=[ 108], 95.00th=[ 109], 00:20:56.648 | 99.00th=[ 121], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:20:56.648 | 99.99th=[ 144] 00:20:56.648 bw ( KiB/s): min= 640, max= 968, per=4.22%, avg=801.65, stdev=115.13, samples=20 00:20:56.648 iops : min= 160, max= 242, avg=200.35, stdev=28.80, samples=20 00:20:56.648 lat (msec) : 20=0.79%, 50=13.41%, 100=67.05%, 250=18.75% 00:20:56.648 cpu : usr=32.84%, sys=2.24%, ctx=942, majf=0, minf=9 00:20:56.648 IO depths : 1=0.1%, 2=0.8%, 4=3.3%, 8=79.7%, 16=16.2%, 32=0.0%, >=64=0.0% 00:20:56.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.648 complete : 0=0.0%, 4=88.3%, 8=11.0%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.648 issued rwts: total=2021,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.648 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.648 filename2: (groupid=0, jobs=1): err= 0: pid=83803: Fri Jul 12 12:45:20 2024 00:20:56.648 read: IOPS=182, BW=731KiB/s (749kB/s)(7316KiB/10004msec) 00:20:56.648 slat (usec): min=5, max=8032, avg=22.06, stdev=249.23 00:20:56.648 clat (msec): min=4, max=216, avg=87.34, stdev=26.55 00:20:56.648 lat (msec): min=4, max=216, avg=87.36, stdev=26.54 00:20:56.649 clat percentiles (msec): 00:20:56.649 | 1.00th=[ 15], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 70], 00:20:56.649 | 30.00th=[ 72], 40.00th=[ 82], 50.00th=[ 88], 60.00th=[ 96], 00:20:56.649 | 70.00th=[ 104], 80.00th=[ 108], 90.00th=[ 116], 95.00th=[ 128], 00:20:56.649 | 99.00th=[ 157], 99.50th=[ 194], 99.90th=[ 218], 99.95th=[ 218], 00:20:56.649 | 99.99th=[ 218] 00:20:56.649 bw ( KiB/s): min= 496, max= 944, per=3.71%, avg=705.68, stdev=152.04, 
samples=19 00:20:56.649 iops : min= 124, max= 236, avg=176.42, stdev=38.01, samples=19 00:20:56.649 lat (msec) : 10=0.71%, 20=0.49%, 50=8.91%, 100=56.42%, 250=33.46% 00:20:56.649 cpu : usr=35.84%, sys=2.61%, ctx=1013, majf=0, minf=9 00:20:56.649 IO depths : 1=0.1%, 2=4.1%, 4=16.1%, 8=66.0%, 16=13.6%, 32=0.0%, >=64=0.0% 00:20:56.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.649 complete : 0=0.0%, 4=91.6%, 8=4.8%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.649 issued rwts: total=1829,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.649 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.649 filename2: (groupid=0, jobs=1): err= 0: pid=83804: Fri Jul 12 12:45:20 2024 00:20:56.649 read: IOPS=188, BW=754KiB/s (772kB/s)(7576KiB/10051msec) 00:20:56.649 slat (usec): min=6, max=4021, avg=17.54, stdev=130.28 00:20:56.649 clat (msec): min=6, max=153, avg=84.73, stdev=23.52 00:20:56.649 lat (msec): min=6, max=153, avg=84.75, stdev=23.52 00:20:56.649 clat percentiles (msec): 00:20:56.649 | 1.00th=[ 10], 5.00th=[ 48], 10.00th=[ 54], 20.00th=[ 67], 00:20:56.649 | 30.00th=[ 75], 40.00th=[ 81], 50.00th=[ 88], 60.00th=[ 96], 00:20:56.649 | 70.00th=[ 99], 80.00th=[ 105], 90.00th=[ 109], 95.00th=[ 113], 00:20:56.649 | 99.00th=[ 131], 99.50th=[ 144], 99.90th=[ 155], 99.95th=[ 155], 00:20:56.649 | 99.99th=[ 155] 00:20:56.649 bw ( KiB/s): min= 528, max= 1126, per=3.95%, avg=750.50, stdev=156.46, samples=20 00:20:56.649 iops : min= 132, max= 281, avg=187.60, stdev=39.05, samples=20 00:20:56.649 lat (msec) : 10=1.58%, 20=0.95%, 50=4.75%, 100=64.26%, 250=28.46% 00:20:56.649 cpu : usr=44.82%, sys=3.03%, ctx=1527, majf=0, minf=9 00:20:56.649 IO depths : 1=0.1%, 2=3.4%, 4=13.5%, 8=68.5%, 16=14.5%, 32=0.0%, >=64=0.0% 00:20:56.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.649 complete : 0=0.0%, 4=91.2%, 8=5.8%, 16=3.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.649 issued rwts: total=1894,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.649 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.649 filename2: (groupid=0, jobs=1): err= 0: pid=83805: Fri Jul 12 12:45:20 2024 00:20:56.649 read: IOPS=204, BW=817KiB/s (836kB/s)(8192KiB/10029msec) 00:20:56.649 slat (usec): min=4, max=4029, avg=18.56, stdev=121.78 00:20:56.649 clat (msec): min=29, max=176, avg=78.20, stdev=22.95 00:20:56.649 lat (msec): min=29, max=176, avg=78.22, stdev=22.95 00:20:56.649 clat percentiles (msec): 00:20:56.649 | 1.00th=[ 39], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 55], 00:20:56.649 | 30.00th=[ 66], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 83], 00:20:56.649 | 70.00th=[ 92], 80.00th=[ 101], 90.00th=[ 107], 95.00th=[ 111], 00:20:56.649 | 99.00th=[ 136], 99.50th=[ 169], 99.90th=[ 169], 99.95th=[ 178], 00:20:56.649 | 99.99th=[ 178] 00:20:56.649 bw ( KiB/s): min= 496, max= 1024, per=4.28%, avg=812.85, stdev=154.24, samples=20 00:20:56.649 iops : min= 124, max= 256, avg=203.20, stdev=38.57, samples=20 00:20:56.649 lat (msec) : 50=14.60%, 100=66.46%, 250=18.95% 00:20:56.649 cpu : usr=40.99%, sys=2.50%, ctx=1383, majf=0, minf=9 00:20:56.649 IO depths : 1=0.1%, 2=0.8%, 4=3.3%, 8=80.3%, 16=15.5%, 32=0.0%, >=64=0.0% 00:20:56.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.649 complete : 0=0.0%, 4=87.8%, 8=11.5%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.649 issued rwts: total=2048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.649 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:56.649 00:20:56.649 Run status 
group 0 (all jobs): 00:20:56.649 READ: bw=18.5MiB/s (19.4MB/s), 731KiB/s-853KiB/s (749kB/s-874kB/s), io=186MiB (195MB), run=10002-10052msec 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.649 12:45:21 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:56.649 bdev_null0 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:56.649 [2024-07-12 12:45:21.133181] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 
512 --md-size 16 --dif-type 1 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:56.649 bdev_null1 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:56.649 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:56.650 { 00:20:56.650 "params": { 00:20:56.650 "name": "Nvme$subsystem", 00:20:56.650 "trtype": "$TEST_TRANSPORT", 00:20:56.650 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.650 "adrfam": "ipv4", 00:20:56.650 "trsvcid": "$NVMF_PORT", 00:20:56.650 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.650 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.650 "hdgst": ${hdgst:-false}, 00:20:56.650 "ddgst": ${ddgst:-false} 00:20:56.650 }, 00:20:56.650 "method": "bdev_nvme_attach_controller" 00:20:56.650 } 00:20:56.650 EOF 00:20:56.650 )") 00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:56.650 { 00:20:56.650 "params": { 00:20:56.650 "name": "Nvme$subsystem", 00:20:56.650 "trtype": "$TEST_TRANSPORT", 00:20:56.650 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:56.650 "adrfam": "ipv4", 00:20:56.650 "trsvcid": "$NVMF_PORT", 00:20:56.650 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:56.650 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:56.650 "hdgst": ${hdgst:-false}, 00:20:56.650 "ddgst": ${ddgst:-false} 00:20:56.650 }, 00:20:56.650 "method": "bdev_nvme_attach_controller" 00:20:56.650 } 00:20:56.650 EOF 00:20:56.650 )") 00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
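Editor's note: stripped of the xtrace prefixes, the per-subsystem plumbing that target/dif.sh performed just above reduces to four RPCs. This is a condensed sketch of commands already visible in the trace; the only assumption is that rpc_cmd wraps scripts/rpc.py against the running nvmf_tgt's default RPC socket.

  # one 64 MiB null bdev with 16-byte metadata and DIF type 1, exported over NVMe/TCP
  rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The same sequence repeats for cnode1 with bdev_null1, which is why both listeners report the 10.0.0.2:4420 notice.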
00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:56.650 "params": { 00:20:56.650 "name": "Nvme0", 00:20:56.650 "trtype": "tcp", 00:20:56.650 "traddr": "10.0.0.2", 00:20:56.650 "adrfam": "ipv4", 00:20:56.650 "trsvcid": "4420", 00:20:56.650 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:56.650 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:56.650 "hdgst": false, 00:20:56.650 "ddgst": false 00:20:56.650 }, 00:20:56.650 "method": "bdev_nvme_attach_controller" 00:20:56.650 },{ 00:20:56.650 "params": { 00:20:56.650 "name": "Nvme1", 00:20:56.650 "trtype": "tcp", 00:20:56.650 "traddr": "10.0.0.2", 00:20:56.650 "adrfam": "ipv4", 00:20:56.650 "trsvcid": "4420", 00:20:56.650 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:56.650 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:56.650 "hdgst": false, 00:20:56.650 "ddgst": false 00:20:56.650 }, 00:20:56.650 "method": "bdev_nvme_attach_controller" 00:20:56.650 }' 00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:56.650 12:45:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:56.650 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:56.650 ... 00:20:56.650 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:56.650 ... 
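Editor's note: the randread run that starts next is launched by the command below, reconstructed verbatim from the LD_PRELOAD trace above. The job file arriving on /dev/fd/61 is produced by gen_fio_conf and is not echoed in the log; its traced parameters are bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, files=1.

  # fio drives the spdk_bdev ioengine; the bdev/NVMe-oF wiring arrives as JSON on fd 62.
  # The leading blank inside LD_PRELOAD is the empty asan_lib slot (no sanitizer found).
  LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61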
00:20:56.650 fio-3.35 00:20:56.650 Starting 4 threads 00:21:01.910 00:21:01.910 filename0: (groupid=0, jobs=1): err= 0: pid=83944: Fri Jul 12 12:45:26 2024 00:21:01.910 read: IOPS=1984, BW=15.5MiB/s (16.3MB/s)(77.6MiB/5002msec) 00:21:01.910 slat (nsec): min=7790, max=41509, avg=14989.41, stdev=2562.32 00:21:01.910 clat (usec): min=1144, max=6186, avg=3975.21, stdev=559.25 00:21:01.910 lat (usec): min=1158, max=6201, avg=3990.20, stdev=559.27 00:21:01.910 clat percentiles (usec): 00:21:01.910 | 1.00th=[ 2278], 5.00th=[ 2638], 10.00th=[ 3458], 20.00th=[ 3851], 00:21:01.910 | 30.00th=[ 3884], 40.00th=[ 3884], 50.00th=[ 3916], 60.00th=[ 3949], 00:21:01.910 | 70.00th=[ 4080], 80.00th=[ 4228], 90.00th=[ 4621], 95.00th=[ 4948], 00:21:01.910 | 99.00th=[ 5407], 99.50th=[ 5473], 99.90th=[ 5932], 99.95th=[ 6128], 00:21:01.910 | 99.99th=[ 6194] 00:21:01.910 bw ( KiB/s): min=14592, max=17952, per=24.40%, avg=15943.11, stdev=1009.75, samples=9 00:21:01.910 iops : min= 1824, max= 2244, avg=1992.89, stdev=126.22, samples=9 00:21:01.910 lat (msec) : 2=0.63%, 4=68.03%, 10=31.34% 00:21:01.910 cpu : usr=91.72%, sys=7.48%, ctx=7, majf=0, minf=10 00:21:01.910 IO depths : 1=0.1%, 2=21.0%, 4=53.2%, 8=25.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:01.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.910 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.910 issued rwts: total=9928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:01.910 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:01.910 filename0: (groupid=0, jobs=1): err= 0: pid=83945: Fri Jul 12 12:45:26 2024 00:21:01.910 read: IOPS=2200, BW=17.2MiB/s (18.0MB/s)(86.0MiB/5002msec) 00:21:01.910 slat (nsec): min=7536, max=44654, avg=12317.57, stdev=3503.63 00:21:01.910 clat (usec): min=648, max=7038, avg=3595.00, stdev=786.22 00:21:01.910 lat (usec): min=657, max=7052, avg=3607.32, stdev=786.87 00:21:01.910 clat percentiles (usec): 00:21:01.910 | 1.00th=[ 1319], 5.00th=[ 1483], 10.00th=[ 2474], 20.00th=[ 3261], 00:21:01.910 | 30.00th=[ 3752], 40.00th=[ 3851], 50.00th=[ 3884], 60.00th=[ 3916], 00:21:01.910 | 70.00th=[ 3949], 80.00th=[ 3982], 90.00th=[ 4228], 95.00th=[ 4490], 00:21:01.910 | 99.00th=[ 4883], 99.50th=[ 4948], 99.90th=[ 5669], 99.95th=[ 6063], 00:21:01.910 | 99.99th=[ 6194] 00:21:01.910 bw ( KiB/s): min=16256, max=20912, per=26.99%, avg=17630.56, stdev=1425.77, samples=9 00:21:01.910 iops : min= 2032, max= 2614, avg=2203.78, stdev=178.22, samples=9 00:21:01.910 lat (usec) : 750=0.16%, 1000=0.04% 00:21:01.910 lat (msec) : 2=7.30%, 4=75.49%, 10=17.02% 00:21:01.910 cpu : usr=91.96%, sys=7.16%, ctx=6, majf=0, minf=0 00:21:01.910 IO depths : 1=0.1%, 2=13.7%, 4=57.8%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:01.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.910 complete : 0=0.0%, 4=94.7%, 8=5.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.910 issued rwts: total=11006,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:01.910 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:01.910 filename1: (groupid=0, jobs=1): err= 0: pid=83946: Fri Jul 12 12:45:26 2024 00:21:01.910 read: IOPS=1984, BW=15.5MiB/s (16.3MB/s)(77.5MiB/5001msec) 00:21:01.910 slat (nsec): min=7927, max=99441, avg=14978.17, stdev=2778.73 00:21:01.910 clat (usec): min=1142, max=6182, avg=3975.24, stdev=558.10 00:21:01.910 lat (usec): min=1156, max=6199, avg=3990.22, stdev=558.15 00:21:01.910 clat percentiles (usec): 00:21:01.910 | 1.00th=[ 2278], 5.00th=[ 2638], 10.00th=[ 3490], 
20.00th=[ 3851], 00:21:01.910 | 30.00th=[ 3884], 40.00th=[ 3884], 50.00th=[ 3916], 60.00th=[ 3949], 00:21:01.910 | 70.00th=[ 4080], 80.00th=[ 4228], 90.00th=[ 4621], 95.00th=[ 4883], 00:21:01.910 | 99.00th=[ 5407], 99.50th=[ 5473], 99.90th=[ 5932], 99.95th=[ 6128], 00:21:01.910 | 99.99th=[ 6194] 00:21:01.910 bw ( KiB/s): min=14592, max=17920, per=24.40%, avg=15943.33, stdev=1006.33, samples=9 00:21:01.910 iops : min= 1824, max= 2240, avg=1992.89, stdev=125.76, samples=9 00:21:01.910 lat (msec) : 2=0.63%, 4=67.91%, 10=31.45% 00:21:01.910 cpu : usr=91.94%, sys=7.22%, ctx=8, majf=0, minf=0 00:21:01.910 IO depths : 1=0.1%, 2=21.0%, 4=53.2%, 8=25.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:01.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.910 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.910 issued rwts: total=9926,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:01.910 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:01.910 filename1: (groupid=0, jobs=1): err= 0: pid=83947: Fri Jul 12 12:45:26 2024 00:21:01.910 read: IOPS=1998, BW=15.6MiB/s (16.4MB/s)(78.1MiB/5003msec) 00:21:01.910 slat (usec): min=7, max=211, avg=14.76, stdev= 5.64 00:21:01.910 clat (usec): min=978, max=6991, avg=3948.66, stdev=797.54 00:21:01.910 lat (usec): min=993, max=7006, avg=3963.41, stdev=797.50 00:21:01.910 clat percentiles (usec): 00:21:01.910 | 1.00th=[ 1844], 5.00th=[ 2245], 10.00th=[ 3032], 20.00th=[ 3818], 00:21:01.910 | 30.00th=[ 3851], 40.00th=[ 3884], 50.00th=[ 3916], 60.00th=[ 3949], 00:21:01.910 | 70.00th=[ 3982], 80.00th=[ 4178], 90.00th=[ 4555], 95.00th=[ 5866], 00:21:01.910 | 99.00th=[ 6194], 99.50th=[ 6259], 99.90th=[ 6521], 99.95th=[ 6587], 00:21:01.910 | 99.99th=[ 6980] 00:21:01.910 bw ( KiB/s): min=12160, max=18368, per=24.33%, avg=15896.89, stdev=1750.16, samples=9 00:21:01.910 iops : min= 1520, max= 2296, avg=1987.11, stdev=218.77, samples=9 00:21:01.910 lat (usec) : 1000=0.04% 00:21:01.911 lat (msec) : 2=2.51%, 4=68.15%, 10=29.30% 00:21:01.911 cpu : usr=90.80%, sys=7.90%, ctx=78, majf=0, minf=9 00:21:01.911 IO depths : 1=0.1%, 2=19.6%, 4=53.5%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:01.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.911 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:01.911 issued rwts: total=9996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:01.911 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:01.911 00:21:01.911 Run status group 0 (all jobs): 00:21:01.911 READ: bw=63.8MiB/s (66.9MB/s), 15.5MiB/s-17.2MiB/s (16.3MB/s-18.0MB/s), io=319MiB (335MB), run=5001-5003msec 00:21:01.911 12:45:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:21:01.911 12:45:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:21:01.911 12:45:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:01.911 12:45:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:01.911 12:45:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:21:01.911 12:45:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:01.911 12:45:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.911 12:45:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:01.911 12:45:27 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.911 12:45:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:01.911 12:45:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.911 12:45:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:01.911 12:45:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.911 12:45:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:21:01.911 12:45:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:21:01.911 12:45:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:21:01.911 12:45:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:01.911 12:45:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.911 12:45:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:01.911 12:45:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.911 12:45:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:21:01.911 12:45:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.911 12:45:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:01.911 ************************************ 00:21:01.911 END TEST fio_dif_rand_params 00:21:01.911 ************************************ 00:21:01.911 12:45:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.911 00:21:01.911 real 0m23.675s 00:21:01.911 user 2m3.214s 00:21:01.911 sys 0m9.526s 00:21:01.911 12:45:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:01.911 12:45:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:21:01.911 12:45:27 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:21:01.911 12:45:27 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:21:01.911 12:45:27 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:01.911 12:45:27 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:01.911 12:45:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:01.911 ************************************ 00:21:01.911 START TEST fio_dif_digest 00:21:01.911 ************************************ 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # 
ddgst=true 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:01.911 bdev_null0 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:01.911 [2024-07-12 12:45:27.369498] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:01.911 { 00:21:01.911 "params": { 00:21:01.911 "name": "Nvme$subsystem", 00:21:01.911 "trtype": "$TEST_TRANSPORT", 00:21:01.911 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.911 "adrfam": "ipv4", 00:21:01.911 "trsvcid": "$NVMF_PORT", 00:21:01.911 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.911 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.911 "hdgst": ${hdgst:-false}, 00:21:01.911 "ddgst": ${ddgst:-false} 00:21:01.911 }, 00:21:01.911 "method": "bdev_nvme_attach_controller" 00:21:01.911 } 00:21:01.911 EOF 00:21:01.911 )") 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
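Editor's note: the fio_plugin helper traced in both runs decides whether a sanitizer runtime must be preloaded ahead of the fio bdev plugin. The sketch below is reconstructed from the ldd/grep/awk lines in the trace; the loop structure is an approximation of autotest_common.sh, not a verbatim copy, and in this run both lookups come back empty.

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  asan_lib=
  for sanitizer in libasan libclang_rt.asan; do
      # third ldd column is the resolved library path; empty when the plugin is not linked against it
      asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
      [[ -n $asan_lib ]] && break
  done
  # asan_lib stays empty here, so only the plugin itself ends up in LD_PRELOAD
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61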
00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:01.911 "params": { 00:21:01.911 "name": "Nvme0", 00:21:01.911 "trtype": "tcp", 00:21:01.911 "traddr": "10.0.0.2", 00:21:01.911 "adrfam": "ipv4", 00:21:01.911 "trsvcid": "4420", 00:21:01.911 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:01.911 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:01.911 "hdgst": true, 00:21:01.911 "ddgst": true 00:21:01.911 }, 00:21:01.911 "method": "bdev_nvme_attach_controller" 00:21:01.911 }' 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:01.911 12:45:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:21:01.911 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:21:01.911 ... 
00:21:01.911 fio-3.35 00:21:01.911 Starting 3 threads 00:21:14.187 00:21:14.187 filename0: (groupid=0, jobs=1): err= 0: pid=84053: Fri Jul 12 12:45:38 2024 00:21:14.187 read: IOPS=224, BW=28.1MiB/s (29.4MB/s)(281MiB/10003msec) 00:21:14.187 slat (nsec): min=6691, max=57469, avg=17375.18, stdev=5375.02 00:21:14.187 clat (usec): min=12881, max=16476, avg=13315.13, stdev=350.50 00:21:14.187 lat (usec): min=12895, max=16506, avg=13332.51, stdev=351.03 00:21:14.187 clat percentiles (usec): 00:21:14.187 | 1.00th=[13042], 5.00th=[13173], 10.00th=[13173], 20.00th=[13173], 00:21:14.187 | 30.00th=[13173], 40.00th=[13173], 50.00th=[13173], 60.00th=[13304], 00:21:14.187 | 70.00th=[13304], 80.00th=[13435], 90.00th=[13435], 95.00th=[13698], 00:21:14.187 | 99.00th=[15401], 99.50th=[16057], 99.90th=[16450], 99.95th=[16450], 00:21:14.187 | 99.99th=[16450] 00:21:14.187 bw ( KiB/s): min=28416, max=29184, per=33.36%, avg=28770.53, stdev=381.59, samples=19 00:21:14.187 iops : min= 222, max= 228, avg=224.74, stdev= 3.00, samples=19 00:21:14.187 lat (msec) : 20=100.00% 00:21:14.187 cpu : usr=91.19%, sys=7.91%, ctx=81, majf=0, minf=0 00:21:14.187 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:14.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.187 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.187 issued rwts: total=2247,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:14.187 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:14.187 filename0: (groupid=0, jobs=1): err= 0: pid=84054: Fri Jul 12 12:45:38 2024 00:21:14.187 read: IOPS=224, BW=28.1MiB/s (29.4MB/s)(281MiB/10001msec) 00:21:14.187 slat (nsec): min=7945, max=67025, avg=17117.48, stdev=5099.30 00:21:14.187 clat (usec): min=10785, max=16478, avg=13312.47, stdev=367.42 00:21:14.187 lat (usec): min=10795, max=16518, avg=13329.59, stdev=367.99 00:21:14.187 clat percentiles (usec): 00:21:14.187 | 1.00th=[13042], 5.00th=[13173], 10.00th=[13173], 20.00th=[13173], 00:21:14.187 | 30.00th=[13173], 40.00th=[13173], 50.00th=[13173], 60.00th=[13304], 00:21:14.187 | 70.00th=[13304], 80.00th=[13435], 90.00th=[13435], 95.00th=[13698], 00:21:14.187 | 99.00th=[15401], 99.50th=[16188], 99.90th=[16450], 99.95th=[16450], 00:21:14.187 | 99.99th=[16450] 00:21:14.187 bw ( KiB/s): min=28416, max=29184, per=33.37%, avg=28779.79, stdev=393.98, samples=19 00:21:14.188 iops : min= 222, max= 228, avg=224.84, stdev= 3.08, samples=19 00:21:14.188 lat (msec) : 20=100.00% 00:21:14.188 cpu : usr=91.01%, sys=8.43%, ctx=18, majf=0, minf=0 00:21:14.188 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:14.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.188 issued rwts: total=2247,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:14.188 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:14.188 filename0: (groupid=0, jobs=1): err= 0: pid=84055: Fri Jul 12 12:45:38 2024 00:21:14.188 read: IOPS=224, BW=28.1MiB/s (29.4MB/s)(281MiB/10004msec) 00:21:14.188 slat (nsec): min=7656, max=55894, avg=16429.71, stdev=6047.97 00:21:14.188 clat (usec): min=12822, max=16443, avg=13318.59, stdev=360.62 00:21:14.188 lat (usec): min=12830, max=16480, avg=13335.02, stdev=361.35 00:21:14.188 clat percentiles (usec): 00:21:14.188 | 1.00th=[13042], 5.00th=[13173], 10.00th=[13173], 20.00th=[13173], 00:21:14.188 | 30.00th=[13173], 40.00th=[13173], 
50.00th=[13173], 60.00th=[13304], 00:21:14.188 | 70.00th=[13304], 80.00th=[13435], 90.00th=[13435], 95.00th=[13698], 00:21:14.188 | 99.00th=[15401], 99.50th=[16188], 99.90th=[16450], 99.95th=[16450], 00:21:14.188 | 99.99th=[16450] 00:21:14.188 bw ( KiB/s): min=28416, max=29184, per=33.35%, avg=28767.58, stdev=384.23, samples=19 00:21:14.188 iops : min= 222, max= 228, avg=224.74, stdev= 3.00, samples=19 00:21:14.188 lat (msec) : 20=100.00% 00:21:14.188 cpu : usr=91.48%, sys=7.92%, ctx=75, majf=0, minf=0 00:21:14.188 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:14.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:14.188 issued rwts: total=2247,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:14.188 latency : target=0, window=0, percentile=100.00%, depth=3 00:21:14.188 00:21:14.188 Run status group 0 (all jobs): 00:21:14.188 READ: bw=84.2MiB/s (88.3MB/s), 28.1MiB/s-28.1MiB/s (29.4MB/s-29.4MB/s), io=843MiB (884MB), run=10001-10004msec 00:21:14.188 12:45:38 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:21:14.188 12:45:38 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:21:14.188 12:45:38 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:21:14.188 12:45:38 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:21:14.188 12:45:38 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:21:14.188 12:45:38 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:14.188 12:45:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.188 12:45:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:14.188 12:45:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.188 12:45:38 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:21:14.188 12:45:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.188 12:45:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:14.188 ************************************ 00:21:14.188 END TEST fio_dif_digest 00:21:14.188 ************************************ 00:21:14.188 12:45:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.188 00:21:14.188 real 0m11.070s 00:21:14.188 user 0m28.040s 00:21:14.188 sys 0m2.733s 00:21:14.188 12:45:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:14.188 12:45:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:21:14.188 12:45:38 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:21:14.188 12:45:38 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:21:14.188 12:45:38 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:21:14.188 12:45:38 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:14.188 12:45:38 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:21:14.188 12:45:38 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:14.188 12:45:38 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:21:14.188 12:45:38 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:14.188 12:45:38 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:14.188 rmmod nvme_tcp 00:21:14.188 rmmod nvme_fabrics 00:21:14.188 rmmod nvme_keyring 00:21:14.188 12:45:38 nvmf_dif -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:14.188 12:45:38 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:21:14.188 12:45:38 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:21:14.188 12:45:38 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 83302 ']' 00:21:14.188 12:45:38 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 83302 00:21:14.188 12:45:38 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 83302 ']' 00:21:14.188 12:45:38 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 83302 00:21:14.188 12:45:38 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:21:14.188 12:45:38 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:14.188 12:45:38 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 83302 00:21:14.188 killing process with pid 83302 00:21:14.188 12:45:38 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:14.188 12:45:38 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:14.188 12:45:38 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 83302' 00:21:14.188 12:45:38 nvmf_dif -- common/autotest_common.sh@967 -- # kill 83302 00:21:14.188 12:45:38 nvmf_dif -- common/autotest_common.sh@972 -- # wait 83302 00:21:14.188 12:45:38 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:21:14.188 12:45:38 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:14.188 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:14.188 Waiting for block devices as requested 00:21:14.188 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:14.188 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:14.188 12:45:39 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:14.188 12:45:39 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:14.188 12:45:39 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:14.188 12:45:39 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:14.188 12:45:39 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.188 12:45:39 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:14.188 12:45:39 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.188 12:45:39 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:14.188 ************************************ 00:21:14.188 END TEST nvmf_dif 00:21:14.188 ************************************ 00:21:14.188 00:21:14.188 real 1m0.067s 00:21:14.188 user 3m47.358s 00:21:14.188 sys 0m20.764s 00:21:14.188 12:45:39 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:14.188 12:45:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:21:14.188 12:45:39 -- common/autotest_common.sh@1142 -- # return 0 00:21:14.188 12:45:39 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:14.188 12:45:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:14.188 12:45:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:14.188 12:45:39 -- common/autotest_common.sh@10 -- # set +x 00:21:14.188 ************************************ 00:21:14.188 START TEST nvmf_abort_qd_sizes 00:21:14.188 ************************************ 00:21:14.188 12:45:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:21:14.188 * Looking for test storage... 00:21:14.188 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:14.188 12:45:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:14.188 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:21:14.188 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:14.188 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:14.188 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:14.188 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:14.188 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:14.188 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:14.188 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:14.188 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:14.188 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:14.188 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:14.188 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=16360ad5-8c23-4d49-afe0-9a35c426fec5 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:14.189 12:45:39 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:14.189 Cannot find device "nvmf_tgt_br" 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:14.189 Cannot find device "nvmf_tgt_br2" 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:14.189 Cannot find device "nvmf_tgt_br" 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:14.189 Cannot find device "nvmf_tgt_br2" 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:14.189 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:14.189 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:14.189 12:45:39 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:14.189 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:14.190 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:14.190 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:14.190 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:14.190 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:14.190 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:14.190 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:14.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:14.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:21:14.190 00:21:14.190 --- 10.0.0.2 ping statistics --- 00:21:14.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.190 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:21:14.190 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:14.190 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:14.190 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.100 ms 00:21:14.190 00:21:14.190 --- 10.0.0.3 ping statistics --- 00:21:14.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.190 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:21:14.190 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:14.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:14.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:21:14.190 00:21:14.190 --- 10.0.0.1 ping statistics --- 00:21:14.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.190 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:21:14.190 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:14.190 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:21:14.190 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:21:14.190 12:45:39 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:14.447 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:14.713 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:14.713 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:14.713 12:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:14.713 12:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:14.713 12:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:14.713 12:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:14.713 12:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:14.713 12:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:14.713 12:45:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:21:14.713 12:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:14.713 12:45:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:14.713 12:45:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:14.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:14.713 12:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=84645 00:21:14.713 12:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 84645 00:21:14.713 12:45:40 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:21:14.713 12:45:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 84645 ']' 00:21:14.713 12:45:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:14.713 12:45:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:14.713 12:45:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:14.713 12:45:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:14.713 12:45:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:14.987 [2024-07-12 12:45:40.797457] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
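Editor's note: before abort_qd_sizes can start its target, nvmf_veth_init builds the usual two-sided virtual topology whose pings were just verified. The commands below are condensed from the trace (addresses and interface names are the test defaults shown above; the second target interface and the various "link set ... up" steps are omitted for brevity), ending with the namespaced nvmf_tgt launch that produces the SPDK startup banner.

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side, stays in the root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target side, moved into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf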
00:21:14.987 [2024-07-12 12:45:40.797562] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:14.987 [2024-07-12 12:45:40.944689] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:15.244 [2024-07-12 12:45:41.077890] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:15.244 [2024-07-12 12:45:41.078039] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:15.244 [2024-07-12 12:45:41.078113] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:15.244 [2024-07-12 12:45:41.078165] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:15.244 [2024-07-12 12:45:41.078196] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:15.244 [2024-07-12 12:45:41.078326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:15.244 [2024-07-12 12:45:41.078459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:15.244 [2024-07-12 12:45:41.079174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:15.244 [2024-07-12 12:45:41.079226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:15.244 [2024-07-12 12:45:41.142107] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:21:15.809 12:45:41 
nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
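The nvme_in_userspace helper traced above identifies NVMe controllers purely by PCI class code: class 01 (mass storage), subclass 08 (non-volatile memory controller) and programming interface 02 (NVM Express) are formatted into the code 0108, lspci output is filtered for prog-if 02 entries, and each matching BDF is kept only if it still has a /sys/bus/pci/drivers/nvme binding on a non-FreeBSD host. A standalone version of just the enumeration step, reusing the same lspci/grep/awk pipeline the script runs (stock tools assumed):

  # Print the BDFs of NVMe controllers (PCI class 01, subclass 08, prog-if 02)
  class=$(printf '%02x' 1) subclass=$(printf '%02x' 8) progif=$(printf '%02x' 2)
  lspci -mm -n -D |
    grep -i -- "-p${progif}" |
    awk -v cc="${class}${subclass}" -F ' ' '{if (cc ~ $2) print $1}' |
    tr -d '"'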
00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:15.809 12:45:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:15.809 ************************************ 00:21:15.809 START TEST spdk_target_abort 00:21:15.809 ************************************ 00:21:15.809 12:45:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:21:15.809 12:45:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:21:15.809 12:45:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:21:15.809 12:45:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.809 12:45:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:16.067 spdk_targetn1 00:21:16.067 12:45:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.067 12:45:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:16.067 12:45:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.067 12:45:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:16.067 [2024-07-12 12:45:41.913042] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:16.067 12:45:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.067 12:45:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:21:16.067 12:45:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.067 12:45:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:16.067 12:45:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.067 12:45:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:21:16.067 12:45:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.067 12:45:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:16.067 12:45:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.067 12:45:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:21:16.067 12:45:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.067 12:45:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:16.067 [2024-07-12 12:45:41.941214] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:16.067 12:45:41 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.067 12:45:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:21:16.067 12:45:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:16.067 12:45:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:16.067 12:45:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:21:16.067 12:45:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:16.067 12:45:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:16.068 12:45:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:16.068 12:45:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:16.068 12:45:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:16.068 12:45:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:16.068 12:45:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:16.068 12:45:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:16.068 12:45:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:16.068 12:45:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:16.068 12:45:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:21:16.068 12:45:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:16.068 12:45:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:16.068 12:45:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:16.068 12:45:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:16.068 12:45:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:16.068 12:45:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:19.347 Initializing NVMe Controllers 00:21:19.347 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:21:19.347 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:19.347 Initialization complete. Launching workers. 
00:21:19.347 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11180, failed: 0 00:21:19.347 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1017, failed to submit 10163 00:21:19.347 success 716, unsuccess 301, failed 0 00:21:19.347 12:45:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:19.347 12:45:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:22.644 Initializing NVMe Controllers 00:21:22.644 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:21:22.644 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:22.644 Initialization complete. Launching workers. 00:21:22.644 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8923, failed: 0 00:21:22.644 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1143, failed to submit 7780 00:21:22.644 success 423, unsuccess 720, failed 0 00:21:22.644 12:45:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:22.644 12:45:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:25.927 Initializing NVMe Controllers 00:21:25.927 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:21:25.927 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:25.927 Initialization complete. Launching workers. 
00:21:25.927 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31678, failed: 0 00:21:25.927 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2438, failed to submit 29240 00:21:25.927 success 430, unsuccess 2008, failed 0 00:21:25.927 12:45:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:21:25.927 12:45:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.927 12:45:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:25.927 12:45:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.927 12:45:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:21:25.927 12:45:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.927 12:45:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:26.493 12:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.493 12:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84645 00:21:26.493 12:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 84645 ']' 00:21:26.493 12:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 84645 00:21:26.493 12:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:21:26.493 12:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:26.493 12:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 84645 00:21:26.493 killing process with pid 84645 00:21:26.493 12:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:26.493 12:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:26.493 12:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 84645' 00:21:26.493 12:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 84645 00:21:26.493 12:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 84645 00:21:26.751 ************************************ 00:21:26.751 END TEST spdk_target_abort 00:21:26.751 ************************************ 00:21:26.751 00:21:26.751 real 0m10.764s 00:21:26.751 user 0m43.178s 00:21:26.751 sys 0m2.351s 00:21:26.751 12:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:26.751 12:45:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:26.751 12:45:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:21:26.751 12:45:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:21:26.751 12:45:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:26.751 12:45:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:26.751 12:45:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:26.751 
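The spdk_target_abort test that has just finished drives everything over JSON-RPC against the namespaced nvmf_tgt (pid 84645): it attaches the first userspace NVMe device (0000:00:10.0) as bdev spdk_targetn1, exports it through an NVMe-oF subsystem listening on 10.0.0.2:4420, runs the abort example at queue depths 4, 24 and 64, then deletes the subsystem and detaches the controller. A condensed sketch of that sequence; rpc.py here stands for the usual scripts/rpc.py wrapper that rpc_cmd points at the target's RPC socket:

  rpc.py bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target    # exposes spdk_targetn1
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
  for qd in 4 24 64; do
      build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  done
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn
  rpc.py bdev_nvme_detach_controller spdk_target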
************************************ 00:21:26.751 START TEST kernel_target_abort 00:21:26.751 ************************************ 00:21:26.751 12:45:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:21:26.751 12:45:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:21:26.751 12:45:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:21:26.751 12:45:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:21:26.751 12:45:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:21:26.751 12:45:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:26.751 12:45:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:26.751 12:45:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:21:26.751 12:45:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:26.751 12:45:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:21:26.751 12:45:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:21:26.751 12:45:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:21:26.751 12:45:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:21:26.751 12:45:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:21:26.751 12:45:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:21:26.751 12:45:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:26.751 12:45:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:26.751 12:45:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:26.751 12:45:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:21:26.751 12:45:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:21:26.751 12:45:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:21:26.751 12:45:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:26.751 12:45:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:27.009 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:27.009 Waiting for block devices as requested 00:21:27.009 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:27.266 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:27.266 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:27.266 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:27.266 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:21:27.266 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:21:27.266 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:21:27.266 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:27.266 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:21:27.266 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:21:27.266 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:27.266 No valid GPT data, bailing 00:21:27.266 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:21:27.523 No valid GPT data, bailing 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:21:27.523 No valid GPT data, bailing 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:27.523 No valid GPT data, bailing 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:21:27.523 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:27.780 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 --hostid=16360ad5-8c23-4d49-afe0-9a35c426fec5 -a 10.0.0.1 -t tcp -s 4420 00:21:27.780 00:21:27.780 Discovery Log Number of Records 2, Generation counter 2 00:21:27.780 =====Discovery Log Entry 0====== 00:21:27.780 trtype: tcp 00:21:27.780 adrfam: ipv4 00:21:27.780 subtype: current discovery subsystem 00:21:27.780 treq: not specified, sq flow control disable supported 00:21:27.780 portid: 1 00:21:27.780 trsvcid: 4420 00:21:27.780 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:27.780 traddr: 10.0.0.1 00:21:27.780 eflags: none 00:21:27.780 sectype: none 00:21:27.780 =====Discovery Log Entry 1====== 00:21:27.780 trtype: tcp 00:21:27.780 adrfam: ipv4 00:21:27.780 subtype: nvme subsystem 00:21:27.780 treq: not specified, sq flow control disable supported 00:21:27.780 portid: 1 00:21:27.780 trsvcid: 4420 00:21:27.780 subnqn: nqn.2016-06.io.spdk:testnqn 00:21:27.780 traddr: 10.0.0.1 00:21:27.780 eflags: none 00:21:27.780 sectype: none 00:21:27.780 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:21:27.780 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:27.780 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:27.780 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:21:27.780 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:27.780 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:21:27.780 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:27.780 12:45:53 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:27.781 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:27.781 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:27.781 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:27.781 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:27.781 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:27.781 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:27.781 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:21:27.781 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:27.781 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:21:27.781 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:27.781 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:27.781 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:27.781 12:45:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:31.103 Initializing NVMe Controllers 00:21:31.103 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:31.103 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:31.103 Initialization complete. Launching workers. 00:21:31.103 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33236, failed: 0 00:21:31.103 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33236, failed to submit 0 00:21:31.103 success 0, unsuccess 33236, failed 0 00:21:31.103 12:45:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:31.103 12:45:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:34.388 Initializing NVMe Controllers 00:21:34.388 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:34.388 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:34.388 Initialization complete. Launching workers. 
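kernel_target_abort repeats the same abort workload against the in-kernel nvmet target instead of nvmf_tgt: setup.sh reset hands the disks back to the kernel nvme driver, the GPT probes above pick the first block device that is not in use (/dev/nvme1n1 here), and configure_kernel_target builds a subsystem and a TCP port for it under /sys/kernel/config/nvmet before the abort runs connect to 10.0.0.1:4420. The xtrace output only shows the values being echoed, not the attribute files they are redirected into; the sketch below fills those in with the standard nvmet configfs names, which is an assumption:

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  modprobe nvmet
  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$port"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"           # destination assumed
  echo 1             > "$subsys/attr_allow_any_host"                     # destination assumed
  echo /dev/nvme1n1  > "$subsys/namespaces/1/device_path"
  echo 1             > "$subsys/namespaces/1/enable"
  echo 10.0.0.1      > "$port/addr_traddr"
  echo tcp           > "$port/addr_trtype"
  echo 4420          > "$port/addr_trsvcid"
  echo ipv4          > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"                                    # port now serves the subsystem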
00:21:34.388 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 69614, failed: 0 00:21:34.388 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29846, failed to submit 39768 00:21:34.388 success 0, unsuccess 29846, failed 0 00:21:34.388 12:45:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:34.388 12:45:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:37.670 Initializing NVMe Controllers 00:21:37.670 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:37.670 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:37.670 Initialization complete. Launching workers. 00:21:37.670 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 80217, failed: 0 00:21:37.670 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20032, failed to submit 60185 00:21:37.670 success 0, unsuccess 20032, failed 0 00:21:37.670 12:46:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:21:37.670 12:46:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:21:37.670 12:46:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:21:37.670 12:46:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:37.670 12:46:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:37.670 12:46:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:37.670 12:46:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:37.670 12:46:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:21:37.670 12:46:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:21:37.670 12:46:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:37.928 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:39.826 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:39.826 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:39.826 00:21:39.826 real 0m13.216s 00:21:39.826 user 0m6.108s 00:21:39.826 sys 0m4.414s 00:21:39.826 12:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:39.826 ************************************ 00:21:39.826 END TEST kernel_target_abort 00:21:39.826 ************************************ 00:21:39.826 12:46:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:21:40.084 12:46:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:21:40.084 12:46:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:40.084 
12:46:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:21:40.084 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:40.084 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:21:40.084 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:40.084 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:21:40.084 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:40.084 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:40.084 rmmod nvme_tcp 00:21:40.084 rmmod nvme_fabrics 00:21:40.084 rmmod nvme_keyring 00:21:40.084 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:40.084 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:21:40.084 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:21:40.084 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 84645 ']' 00:21:40.084 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 84645 00:21:40.084 12:46:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 84645 ']' 00:21:40.084 Process with pid 84645 is not found 00:21:40.084 12:46:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 84645 00:21:40.084 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (84645) - No such process 00:21:40.084 12:46:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 84645 is not found' 00:21:40.084 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:21:40.084 12:46:05 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:40.343 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:40.343 Waiting for block devices as requested 00:21:40.343 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:40.602 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:40.602 12:46:06 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:40.602 12:46:06 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:40.602 12:46:06 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:40.602 12:46:06 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:40.602 12:46:06 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.602 12:46:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:40.602 12:46:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.602 12:46:06 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:40.602 00:21:40.602 real 0m27.128s 00:21:40.602 user 0m50.370s 00:21:40.602 sys 0m8.078s 00:21:40.602 ************************************ 00:21:40.602 END TEST nvmf_abort_qd_sizes 00:21:40.602 ************************************ 00:21:40.602 12:46:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:40.602 12:46:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:21:40.602 12:46:06 -- common/autotest_common.sh@1142 -- # return 0 00:21:40.602 12:46:06 -- spdk/autotest.sh@295 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:40.602 12:46:06 -- common/autotest_common.sh@1099 -- # '[' 2 
-le 1 ']' 00:21:40.602 12:46:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:40.602 12:46:06 -- common/autotest_common.sh@10 -- # set +x 00:21:40.602 ************************************ 00:21:40.602 START TEST keyring_file 00:21:40.602 ************************************ 00:21:40.602 12:46:06 keyring_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:40.860 * Looking for test storage... 00:21:40.860 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:40.860 12:46:06 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:40.860 12:46:06 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:40.860 12:46:06 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:21:40.860 12:46:06 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:40.860 12:46:06 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:40.860 12:46:06 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:40.860 12:46:06 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:40.860 12:46:06 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:40.860 12:46:06 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:40.860 12:46:06 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:40.860 12:46:06 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:40.860 12:46:06 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:40.860 12:46:06 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:40.860 12:46:06 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:21:40.860 12:46:06 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=16360ad5-8c23-4d49-afe0-9a35c426fec5 00:21:40.860 12:46:06 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:40.860 12:46:06 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:40.860 12:46:06 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:40.860 12:46:06 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:40.860 12:46:06 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:40.860 12:46:06 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:40.860 12:46:06 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:40.860 12:46:06 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:40.860 12:46:06 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.860 12:46:06 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.860 12:46:06 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.860 12:46:06 keyring_file -- paths/export.sh@5 -- # export PATH 00:21:40.860 12:46:06 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.860 12:46:06 keyring_file -- nvmf/common.sh@47 -- # : 0 00:21:40.860 12:46:06 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:40.860 12:46:06 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:40.860 12:46:06 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:40.860 12:46:06 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:40.860 12:46:06 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:40.860 12:46:06 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:40.860 12:46:06 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:40.860 12:46:06 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:40.860 12:46:06 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:40.860 12:46:06 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:40.861 12:46:06 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:40.861 12:46:06 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:21:40.861 12:46:06 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:21:40.861 12:46:06 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:21:40.861 12:46:06 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:40.861 12:46:06 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:40.861 12:46:06 keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:40.861 12:46:06 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:40.861 12:46:06 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:40.861 12:46:06 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:40.861 12:46:06 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.GNK3qxiTIN 00:21:40.861 12:46:06 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:40.861 12:46:06 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff 0 00:21:40.861 12:46:06 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:21:40.861 12:46:06 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:40.861 12:46:06 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:40.861 12:46:06 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:21:40.861 12:46:06 keyring_file -- nvmf/common.sh@705 -- # python - 00:21:40.861 12:46:06 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.GNK3qxiTIN 00:21:40.861 12:46:06 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.GNK3qxiTIN 00:21:40.861 12:46:06 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.GNK3qxiTIN 00:21:40.861 12:46:06 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:21:40.861 12:46:06 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:40.861 12:46:06 keyring_file -- keyring/common.sh@17 -- # name=key1 00:21:40.861 12:46:06 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:40.861 12:46:06 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:40.861 12:46:06 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:40.861 12:46:06 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.qgLEPWWplE 00:21:40.861 12:46:06 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:21:40.861 12:46:06 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:40.861 12:46:06 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:21:40.861 12:46:06 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:40.861 12:46:06 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:21:40.861 12:46:06 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:21:40.861 12:46:06 keyring_file -- nvmf/common.sh@705 -- # python - 00:21:40.861 12:46:06 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.qgLEPWWplE 00:21:40.861 12:46:06 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.qgLEPWWplE 00:21:40.861 12:46:06 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.qgLEPWWplE 00:21:40.861 12:46:06 keyring_file -- keyring/file.sh@30 -- # tgtpid=85515 00:21:40.861 12:46:06 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:40.861 12:46:06 keyring_file -- keyring/file.sh@32 -- # waitforlisten 85515 00:21:40.861 12:46:06 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 85515 ']' 00:21:40.861 12:46:06 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.861 12:46:06 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:40.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:40.861 12:46:06 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.861 12:46:06 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:40.861 12:46:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:41.119 [2024-07-12 12:46:06.939948] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
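The keyring_file traces above prepare two file-based PSKs before spdk_tgt comes up: prep_key runs each hex key through format_interchange_psk, which wraps it into the NVMe/TCP TLS PSK interchange form (an NVMeTLSkey-1:...: string with the key material base64-encoded), writes the result to a mktemp file, and locks the file down to mode 0600. A condensed sketch of that preparation, assuming the helpers from test/keyring/common.sh and test/nvmf/common.sh are sourced:

  # prep_key key0 00112233445566778899aabbccddeeff 0, condensed
  key0path=$(mktemp)                                                       # e.g. /tmp/tmp.GNK3qxiTIN
  format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$key0path"  # emits an NVMeTLSkey-1:...: string
  chmod 0600 "$key0path"
  # same again for key1: 112233445566778899aabbccddeeff00 -> /tmp/tmp.qgLEPWWplE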
00:21:41.119 [2024-07-12 12:46:06.940067] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85515 ] 00:21:41.119 [2024-07-12 12:46:07.087466] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.378 [2024-07-12 12:46:07.220890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.378 [2024-07-12 12:46:07.280611] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:41.947 12:46:07 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:41.947 12:46:07 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:21:41.947 12:46:07 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:21:41.947 12:46:07 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.947 12:46:07 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:41.947 [2024-07-12 12:46:07.957676] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:41.947 null0 00:21:41.947 [2024-07-12 12:46:07.989661] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:41.947 [2024-07-12 12:46:07.989931] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:41.947 [2024-07-12 12:46:07.997663] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:41.947 12:46:08 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.947 12:46:08 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:41.947 12:46:08 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:21:41.947 12:46:08 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:41.947 12:46:08 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:41.947 12:46:08 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:41.947 12:46:08 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:41.947 12:46:08 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:41.947 12:46:08 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:41.947 12:46:08 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.947 12:46:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:41.947 [2024-07-12 12:46:08.009671] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:21:41.947 request: 00:21:41.947 { 00:21:41.947 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:21:41.947 "secure_channel": false, 00:21:41.947 "listen_address": { 00:21:41.947 "trtype": "tcp", 00:21:41.947 "traddr": "127.0.0.1", 00:21:41.947 "trsvcid": "4420" 00:21:41.947 }, 00:21:41.947 "method": "nvmf_subsystem_add_listener", 00:21:41.947 "req_id": 1 00:21:41.947 } 00:21:41.947 Got JSON-RPC error response 00:21:41.947 response: 00:21:41.947 { 00:21:41.947 "code": -32602, 00:21:41.947 "message": "Invalid parameters" 00:21:41.947 } 00:21:41.947 12:46:08 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 
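After the negative add-listener check just above (adding a duplicate 127.0.0.1:4420 listener is expected to fail with "Listener already exists"), the test registers both key files with the bdevperf instance over its dedicated RPC socket, reads their reference counts back through keyring_get_keys and jq, and then attaches a TLS-protected controller with --psk key0, which is what bumps key0's refcnt from 1 to 2 in the traces that follow. A condensed sketch of those bperf_cmd calls, using the /var/tmp/bperf.sock socket shown in the trace:

  rpc="scripts/rpc.py -s /var/tmp/bperf.sock"
  $rpc keyring_file_add_key key0 /tmp/tmp.GNK3qxiTIN
  $rpc keyring_file_add_key key1 /tmp/tmp.qgLEPWWplE
  $rpc keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'    # 1 before the attach
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
  $rpc keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'    # 2 once nvme0n1 is up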
00:21:41.947 12:46:08 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:21:41.947 12:46:08 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:41.947 12:46:08 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:41.947 12:46:08 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:41.947 12:46:08 keyring_file -- keyring/file.sh@46 -- # bperfpid=85528 00:21:42.205 12:46:08 keyring_file -- keyring/file.sh@48 -- # waitforlisten 85528 /var/tmp/bperf.sock 00:21:42.205 12:46:08 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 85528 ']' 00:21:42.205 12:46:08 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:42.205 12:46:08 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:42.205 12:46:08 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:42.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:42.205 12:46:08 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:42.205 12:46:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:42.205 12:46:08 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:21:42.205 [2024-07-12 12:46:08.076754] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 00:21:42.205 [2024-07-12 12:46:08.076870] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85528 ] 00:21:42.205 [2024-07-12 12:46:08.213169] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.463 [2024-07-12 12:46:08.378174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:42.463 [2024-07-12 12:46:08.434923] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:43.030 12:46:09 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:43.030 12:46:09 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:21:43.030 12:46:09 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.GNK3qxiTIN 00:21:43.030 12:46:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.GNK3qxiTIN 00:21:43.287 12:46:09 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.qgLEPWWplE 00:21:43.287 12:46:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.qgLEPWWplE 00:21:43.562 12:46:09 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:21:43.562 12:46:09 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:21:43.562 12:46:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:43.562 12:46:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:43.562 12:46:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:43.837 12:46:09 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.GNK3qxiTIN == 
\/\t\m\p\/\t\m\p\.\G\N\K\3\q\x\i\T\I\N ]] 00:21:43.837 12:46:09 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:21:43.837 12:46:09 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:21:43.837 12:46:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:43.837 12:46:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:43.837 12:46:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:44.095 12:46:10 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.qgLEPWWplE == \/\t\m\p\/\t\m\p\.\q\g\L\E\P\W\W\p\l\E ]] 00:21:44.095 12:46:10 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:21:44.095 12:46:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:44.095 12:46:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:44.095 12:46:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:44.095 12:46:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:44.095 12:46:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:44.353 12:46:10 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:21:44.353 12:46:10 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:21:44.353 12:46:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:44.353 12:46:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:44.353 12:46:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:44.353 12:46:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:44.353 12:46:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:44.611 12:46:10 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:21:44.611 12:46:10 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:44.611 12:46:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:44.870 [2024-07-12 12:46:10.807099] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:44.870 nvme0n1 00:21:44.870 12:46:10 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:21:44.870 12:46:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:44.870 12:46:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:44.870 12:46:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:44.870 12:46:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:44.870 12:46:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:45.128 12:46:11 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:21:45.128 12:46:11 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:21:45.128 12:46:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:45.128 12:46:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:45.128 12:46:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:21:45.128 12:46:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:45.128 12:46:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:45.386 12:46:11 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:21:45.386 12:46:11 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:45.644 Running I/O for 1 seconds... 00:21:46.580 00:21:46.580 Latency(us) 00:21:46.580 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:46.580 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:21:46.580 nvme0n1 : 1.01 11653.23 45.52 0.00 0.00 10941.56 5451.40 17754.30 00:21:46.580 =================================================================================================================== 00:21:46.580 Total : 11653.23 45.52 0.00 0.00 10941.56 5451.40 17754.30 00:21:46.580 0 00:21:46.580 12:46:12 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:46.580 12:46:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:46.838 12:46:12 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:21:46.838 12:46:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:46.838 12:46:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:46.838 12:46:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:46.838 12:46:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:46.838 12:46:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:47.108 12:46:13 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:21:47.108 12:46:13 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:21:47.108 12:46:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:47.108 12:46:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:47.108 12:46:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:47.108 12:46:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:47.108 12:46:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:47.367 12:46:13 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:21:47.367 12:46:13 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:47.367 12:46:13 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:21:47.367 12:46:13 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:47.367 12:46:13 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:21:47.367 12:46:13 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:47.367 12:46:13 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:21:47.367 12:46:13 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:21:47.367 12:46:13 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:47.367 12:46:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:47.626 [2024-07-12 12:46:13.534675] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:47.626 [2024-07-12 12:46:13.535301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213c4f0 (107): Transport endpoint is not connected 00:21:47.626 [2024-07-12 12:46:13.536293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x213c4f0 (9): Bad file descriptor 00:21:47.626 [2024-07-12 12:46:13.537291] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:47.626 [2024-07-12 12:46:13.537323] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:47.626 [2024-07-12 12:46:13.537334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:47.626 request: 00:21:47.626 { 00:21:47.626 "name": "nvme0", 00:21:47.626 "trtype": "tcp", 00:21:47.626 "traddr": "127.0.0.1", 00:21:47.626 "adrfam": "ipv4", 00:21:47.626 "trsvcid": "4420", 00:21:47.626 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:47.626 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:47.626 "prchk_reftag": false, 00:21:47.626 "prchk_guard": false, 00:21:47.626 "hdgst": false, 00:21:47.626 "ddgst": false, 00:21:47.626 "psk": "key1", 00:21:47.626 "method": "bdev_nvme_attach_controller", 00:21:47.626 "req_id": 1 00:21:47.626 } 00:21:47.626 Got JSON-RPC error response 00:21:47.626 response: 00:21:47.626 { 00:21:47.626 "code": -5, 00:21:47.626 "message": "Input/output error" 00:21:47.626 } 00:21:47.626 12:46:13 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:21:47.626 12:46:13 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:47.626 12:46:13 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:47.626 12:46:13 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:47.626 12:46:13 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:21:47.626 12:46:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:47.626 12:46:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:47.626 12:46:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:47.626 12:46:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:47.626 12:46:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:47.884 12:46:13 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:21:47.884 12:46:13 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:21:47.884 12:46:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:47.884 12:46:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:47.884 12:46:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:47.884 12:46:13 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:47.884 12:46:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:48.140 12:46:14 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:21:48.141 12:46:14 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:21:48.141 12:46:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:48.398 12:46:14 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:21:48.398 12:46:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:21:48.657 12:46:14 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:21:48.657 12:46:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:48.657 12:46:14 keyring_file -- keyring/file.sh@77 -- # jq length 00:21:48.916 12:46:14 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:21:48.916 12:46:14 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.GNK3qxiTIN 00:21:48.916 12:46:14 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.GNK3qxiTIN 00:21:48.916 12:46:14 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:21:48.916 12:46:14 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.GNK3qxiTIN 00:21:48.916 12:46:14 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:21:48.916 12:46:14 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:48.916 12:46:14 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:21:48.916 12:46:14 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:48.916 12:46:14 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.GNK3qxiTIN 00:21:48.916 12:46:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.GNK3qxiTIN 00:21:49.175 [2024-07-12 12:46:15.061706] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.GNK3qxiTIN': 0100660 00:21:49.175 [2024-07-12 12:46:15.061764] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:49.175 request: 00:21:49.175 { 00:21:49.175 "name": "key0", 00:21:49.175 "path": "/tmp/tmp.GNK3qxiTIN", 00:21:49.175 "method": "keyring_file_add_key", 00:21:49.175 "req_id": 1 00:21:49.175 } 00:21:49.175 Got JSON-RPC error response 00:21:49.175 response: 00:21:49.175 { 00:21:49.175 "code": -1, 00:21:49.175 "message": "Operation not permitted" 00:21:49.175 } 00:21:49.175 12:46:15 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:21:49.175 12:46:15 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:49.175 12:46:15 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:49.175 12:46:15 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:49.175 12:46:15 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.GNK3qxiTIN 00:21:49.175 12:46:15 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.GNK3qxiTIN 00:21:49.175 12:46:15 keyring_file -- keyring/common.sh@8 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.GNK3qxiTIN 00:21:49.434 12:46:15 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.GNK3qxiTIN 00:21:49.434 12:46:15 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:21:49.434 12:46:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:49.434 12:46:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:49.434 12:46:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:49.434 12:46:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:49.434 12:46:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:49.692 12:46:15 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:21:49.692 12:46:15 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:49.692 12:46:15 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:21:49.692 12:46:15 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:49.692 12:46:15 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:21:49.692 12:46:15 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:49.692 12:46:15 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:21:49.692 12:46:15 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:49.692 12:46:15 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:49.692 12:46:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:49.951 [2024-07-12 12:46:15.861898] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.GNK3qxiTIN': No such file or directory 00:21:49.951 [2024-07-12 12:46:15.861968] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:21:49.951 [2024-07-12 12:46:15.861993] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:21:49.951 [2024-07-12 12:46:15.862002] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:49.951 [2024-07-12 12:46:15.862010] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:21:49.951 request: 00:21:49.951 { 00:21:49.951 "name": "nvme0", 00:21:49.951 "trtype": "tcp", 00:21:49.951 "traddr": "127.0.0.1", 00:21:49.951 "adrfam": "ipv4", 00:21:49.951 "trsvcid": "4420", 00:21:49.951 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:49.951 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:49.951 "prchk_reftag": false, 00:21:49.951 "prchk_guard": false, 00:21:49.951 "hdgst": false, 00:21:49.951 "ddgst": false, 00:21:49.951 "psk": "key0", 00:21:49.951 "method": "bdev_nvme_attach_controller", 00:21:49.951 "req_id": 1 00:21:49.951 } 00:21:49.951 
Got JSON-RPC error response 00:21:49.951 response: 00:21:49.951 { 00:21:49.951 "code": -19, 00:21:49.951 "message": "No such device" 00:21:49.951 } 00:21:49.951 12:46:15 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:21:49.951 12:46:15 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:49.951 12:46:15 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:49.951 12:46:15 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:49.951 12:46:15 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:21:49.951 12:46:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:50.210 12:46:16 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:50.210 12:46:16 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:50.210 12:46:16 keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:50.210 12:46:16 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:50.210 12:46:16 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:50.210 12:46:16 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:50.210 12:46:16 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.nKzTflCqei 00:21:50.210 12:46:16 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:50.210 12:46:16 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:50.210 12:46:16 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:21:50.210 12:46:16 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:50.210 12:46:16 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:50.210 12:46:16 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:21:50.210 12:46:16 keyring_file -- nvmf/common.sh@705 -- # python - 00:21:50.210 12:46:16 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.nKzTflCqei 00:21:50.210 12:46:16 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.nKzTflCqei 00:21:50.210 12:46:16 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.nKzTflCqei 00:21:50.210 12:46:16 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nKzTflCqei 00:21:50.210 12:46:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nKzTflCqei 00:21:50.468 12:46:16 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:50.468 12:46:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:50.726 nvme0n1 00:21:50.726 12:46:16 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:21:50.726 12:46:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:50.726 12:46:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:50.726 12:46:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:50.726 12:46:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:21:50.726 12:46:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:50.984 12:46:16 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:21:50.984 12:46:17 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:21:50.984 12:46:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:51.241 12:46:17 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:21:51.241 12:46:17 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:21:51.241 12:46:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:51.241 12:46:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:51.241 12:46:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:51.499 12:46:17 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:21:51.499 12:46:17 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:21:51.499 12:46:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:51.499 12:46:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:51.499 12:46:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:51.499 12:46:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:51.499 12:46:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:51.757 12:46:17 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:21:51.757 12:46:17 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:51.757 12:46:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:52.015 12:46:18 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:21:52.015 12:46:18 keyring_file -- keyring/file.sh@104 -- # jq length 00:21:52.015 12:46:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:52.274 12:46:18 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:21:52.274 12:46:18 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nKzTflCqei 00:21:52.274 12:46:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nKzTflCqei 00:21:52.533 12:46:18 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.qgLEPWWplE 00:21:52.533 12:46:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.qgLEPWWplE 00:21:52.791 12:46:18 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:52.791 12:46:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:53.049 nvme0n1 00:21:53.049 12:46:19 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:21:53.049 12:46:19 keyring_file 
-- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:21:53.308 12:46:19 keyring_file -- keyring/file.sh@112 -- # config='{ 00:21:53.308 "subsystems": [ 00:21:53.308 { 00:21:53.308 "subsystem": "keyring", 00:21:53.308 "config": [ 00:21:53.308 { 00:21:53.308 "method": "keyring_file_add_key", 00:21:53.308 "params": { 00:21:53.308 "name": "key0", 00:21:53.308 "path": "/tmp/tmp.nKzTflCqei" 00:21:53.308 } 00:21:53.308 }, 00:21:53.308 { 00:21:53.308 "method": "keyring_file_add_key", 00:21:53.308 "params": { 00:21:53.308 "name": "key1", 00:21:53.308 "path": "/tmp/tmp.qgLEPWWplE" 00:21:53.308 } 00:21:53.308 } 00:21:53.308 ] 00:21:53.308 }, 00:21:53.308 { 00:21:53.308 "subsystem": "iobuf", 00:21:53.308 "config": [ 00:21:53.308 { 00:21:53.308 "method": "iobuf_set_options", 00:21:53.308 "params": { 00:21:53.308 "small_pool_count": 8192, 00:21:53.308 "large_pool_count": 1024, 00:21:53.308 "small_bufsize": 8192, 00:21:53.308 "large_bufsize": 135168 00:21:53.308 } 00:21:53.308 } 00:21:53.308 ] 00:21:53.308 }, 00:21:53.308 { 00:21:53.308 "subsystem": "sock", 00:21:53.308 "config": [ 00:21:53.308 { 00:21:53.308 "method": "sock_set_default_impl", 00:21:53.308 "params": { 00:21:53.308 "impl_name": "uring" 00:21:53.308 } 00:21:53.308 }, 00:21:53.308 { 00:21:53.308 "method": "sock_impl_set_options", 00:21:53.308 "params": { 00:21:53.308 "impl_name": "ssl", 00:21:53.308 "recv_buf_size": 4096, 00:21:53.308 "send_buf_size": 4096, 00:21:53.308 "enable_recv_pipe": true, 00:21:53.308 "enable_quickack": false, 00:21:53.308 "enable_placement_id": 0, 00:21:53.308 "enable_zerocopy_send_server": true, 00:21:53.308 "enable_zerocopy_send_client": false, 00:21:53.308 "zerocopy_threshold": 0, 00:21:53.308 "tls_version": 0, 00:21:53.308 "enable_ktls": false 00:21:53.308 } 00:21:53.308 }, 00:21:53.308 { 00:21:53.308 "method": "sock_impl_set_options", 00:21:53.308 "params": { 00:21:53.308 "impl_name": "posix", 00:21:53.308 "recv_buf_size": 2097152, 00:21:53.308 "send_buf_size": 2097152, 00:21:53.308 "enable_recv_pipe": true, 00:21:53.308 "enable_quickack": false, 00:21:53.308 "enable_placement_id": 0, 00:21:53.308 "enable_zerocopy_send_server": true, 00:21:53.308 "enable_zerocopy_send_client": false, 00:21:53.308 "zerocopy_threshold": 0, 00:21:53.308 "tls_version": 0, 00:21:53.308 "enable_ktls": false 00:21:53.308 } 00:21:53.308 }, 00:21:53.308 { 00:21:53.308 "method": "sock_impl_set_options", 00:21:53.308 "params": { 00:21:53.308 "impl_name": "uring", 00:21:53.308 "recv_buf_size": 2097152, 00:21:53.308 "send_buf_size": 2097152, 00:21:53.308 "enable_recv_pipe": true, 00:21:53.308 "enable_quickack": false, 00:21:53.309 "enable_placement_id": 0, 00:21:53.309 "enable_zerocopy_send_server": false, 00:21:53.309 "enable_zerocopy_send_client": false, 00:21:53.309 "zerocopy_threshold": 0, 00:21:53.309 "tls_version": 0, 00:21:53.309 "enable_ktls": false 00:21:53.309 } 00:21:53.309 } 00:21:53.309 ] 00:21:53.309 }, 00:21:53.309 { 00:21:53.309 "subsystem": "vmd", 00:21:53.309 "config": [] 00:21:53.309 }, 00:21:53.309 { 00:21:53.309 "subsystem": "accel", 00:21:53.309 "config": [ 00:21:53.309 { 00:21:53.309 "method": "accel_set_options", 00:21:53.309 "params": { 00:21:53.309 "small_cache_size": 128, 00:21:53.309 "large_cache_size": 16, 00:21:53.309 "task_count": 2048, 00:21:53.309 "sequence_count": 2048, 00:21:53.309 "buf_count": 2048 00:21:53.309 } 00:21:53.309 } 00:21:53.309 ] 00:21:53.309 }, 00:21:53.309 { 00:21:53.309 "subsystem": "bdev", 00:21:53.309 "config": [ 00:21:53.309 { 
00:21:53.309 "method": "bdev_set_options", 00:21:53.309 "params": { 00:21:53.309 "bdev_io_pool_size": 65535, 00:21:53.309 "bdev_io_cache_size": 256, 00:21:53.309 "bdev_auto_examine": true, 00:21:53.309 "iobuf_small_cache_size": 128, 00:21:53.309 "iobuf_large_cache_size": 16 00:21:53.309 } 00:21:53.309 }, 00:21:53.309 { 00:21:53.309 "method": "bdev_raid_set_options", 00:21:53.309 "params": { 00:21:53.309 "process_window_size_kb": 1024 00:21:53.309 } 00:21:53.309 }, 00:21:53.309 { 00:21:53.309 "method": "bdev_iscsi_set_options", 00:21:53.309 "params": { 00:21:53.309 "timeout_sec": 30 00:21:53.309 } 00:21:53.309 }, 00:21:53.309 { 00:21:53.309 "method": "bdev_nvme_set_options", 00:21:53.309 "params": { 00:21:53.309 "action_on_timeout": "none", 00:21:53.309 "timeout_us": 0, 00:21:53.309 "timeout_admin_us": 0, 00:21:53.309 "keep_alive_timeout_ms": 10000, 00:21:53.309 "arbitration_burst": 0, 00:21:53.309 "low_priority_weight": 0, 00:21:53.309 "medium_priority_weight": 0, 00:21:53.309 "high_priority_weight": 0, 00:21:53.309 "nvme_adminq_poll_period_us": 10000, 00:21:53.309 "nvme_ioq_poll_period_us": 0, 00:21:53.309 "io_queue_requests": 512, 00:21:53.309 "delay_cmd_submit": true, 00:21:53.309 "transport_retry_count": 4, 00:21:53.309 "bdev_retry_count": 3, 00:21:53.309 "transport_ack_timeout": 0, 00:21:53.309 "ctrlr_loss_timeout_sec": 0, 00:21:53.309 "reconnect_delay_sec": 0, 00:21:53.309 "fast_io_fail_timeout_sec": 0, 00:21:53.309 "disable_auto_failback": false, 00:21:53.309 "generate_uuids": false, 00:21:53.309 "transport_tos": 0, 00:21:53.309 "nvme_error_stat": false, 00:21:53.309 "rdma_srq_size": 0, 00:21:53.309 "io_path_stat": false, 00:21:53.309 "allow_accel_sequence": false, 00:21:53.309 "rdma_max_cq_size": 0, 00:21:53.309 "rdma_cm_event_timeout_ms": 0, 00:21:53.309 "dhchap_digests": [ 00:21:53.309 "sha256", 00:21:53.309 "sha384", 00:21:53.309 "sha512" 00:21:53.309 ], 00:21:53.309 "dhchap_dhgroups": [ 00:21:53.309 "null", 00:21:53.309 "ffdhe2048", 00:21:53.309 "ffdhe3072", 00:21:53.309 "ffdhe4096", 00:21:53.309 "ffdhe6144", 00:21:53.309 "ffdhe8192" 00:21:53.309 ] 00:21:53.309 } 00:21:53.309 }, 00:21:53.309 { 00:21:53.309 "method": "bdev_nvme_attach_controller", 00:21:53.309 "params": { 00:21:53.309 "name": "nvme0", 00:21:53.309 "trtype": "TCP", 00:21:53.309 "adrfam": "IPv4", 00:21:53.309 "traddr": "127.0.0.1", 00:21:53.309 "trsvcid": "4420", 00:21:53.309 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:53.309 "prchk_reftag": false, 00:21:53.309 "prchk_guard": false, 00:21:53.309 "ctrlr_loss_timeout_sec": 0, 00:21:53.309 "reconnect_delay_sec": 0, 00:21:53.309 "fast_io_fail_timeout_sec": 0, 00:21:53.309 "psk": "key0", 00:21:53.309 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:53.309 "hdgst": false, 00:21:53.309 "ddgst": false 00:21:53.309 } 00:21:53.309 }, 00:21:53.309 { 00:21:53.309 "method": "bdev_nvme_set_hotplug", 00:21:53.309 "params": { 00:21:53.309 "period_us": 100000, 00:21:53.309 "enable": false 00:21:53.309 } 00:21:53.309 }, 00:21:53.309 { 00:21:53.309 "method": "bdev_wait_for_examine" 00:21:53.309 } 00:21:53.309 ] 00:21:53.309 }, 00:21:53.309 { 00:21:53.309 "subsystem": "nbd", 00:21:53.309 "config": [] 00:21:53.309 } 00:21:53.309 ] 00:21:53.309 }' 00:21:53.309 12:46:19 keyring_file -- keyring/file.sh@114 -- # killprocess 85528 00:21:53.309 12:46:19 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 85528 ']' 00:21:53.309 12:46:19 keyring_file -- common/autotest_common.sh@952 -- # kill -0 85528 00:21:53.309 12:46:19 keyring_file -- common/autotest_common.sh@953 -- # uname 
00:21:53.309 12:46:19 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:53.309 12:46:19 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85528 00:21:53.309 killing process with pid 85528 00:21:53.310 Received shutdown signal, test time was about 1.000000 seconds 00:21:53.310 00:21:53.310 Latency(us) 00:21:53.310 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:53.310 =================================================================================================================== 00:21:53.310 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:53.310 12:46:19 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:53.310 12:46:19 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:53.310 12:46:19 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85528' 00:21:53.310 12:46:19 keyring_file -- common/autotest_common.sh@967 -- # kill 85528 00:21:53.310 12:46:19 keyring_file -- common/autotest_common.sh@972 -- # wait 85528 00:21:53.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:53.569 12:46:19 keyring_file -- keyring/file.sh@117 -- # bperfpid=85776 00:21:53.569 12:46:19 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:21:53.569 12:46:19 keyring_file -- keyring/file.sh@119 -- # waitforlisten 85776 /var/tmp/bperf.sock 00:21:53.569 12:46:19 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 85776 ']' 00:21:53.569 12:46:19 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:53.569 12:46:19 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:53.569 12:46:19 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:21:53.569 12:46:19 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:21:53.569 "subsystems": [ 00:21:53.569 { 00:21:53.569 "subsystem": "keyring", 00:21:53.569 "config": [ 00:21:53.569 { 00:21:53.569 "method": "keyring_file_add_key", 00:21:53.569 "params": { 00:21:53.569 "name": "key0", 00:21:53.569 "path": "/tmp/tmp.nKzTflCqei" 00:21:53.569 } 00:21:53.569 }, 00:21:53.569 { 00:21:53.569 "method": "keyring_file_add_key", 00:21:53.569 "params": { 00:21:53.569 "name": "key1", 00:21:53.569 "path": "/tmp/tmp.qgLEPWWplE" 00:21:53.569 } 00:21:53.569 } 00:21:53.569 ] 00:21:53.569 }, 00:21:53.569 { 00:21:53.569 "subsystem": "iobuf", 00:21:53.569 "config": [ 00:21:53.569 { 00:21:53.569 "method": "iobuf_set_options", 00:21:53.569 "params": { 00:21:53.569 "small_pool_count": 8192, 00:21:53.569 "large_pool_count": 1024, 00:21:53.569 "small_bufsize": 8192, 00:21:53.569 "large_bufsize": 135168 00:21:53.569 } 00:21:53.569 } 00:21:53.569 ] 00:21:53.569 }, 00:21:53.569 { 00:21:53.569 "subsystem": "sock", 00:21:53.569 "config": [ 00:21:53.569 { 00:21:53.569 "method": "sock_set_default_impl", 00:21:53.569 "params": { 00:21:53.569 "impl_name": "uring" 00:21:53.569 } 00:21:53.569 }, 00:21:53.569 { 00:21:53.569 "method": "sock_impl_set_options", 00:21:53.569 "params": { 00:21:53.569 "impl_name": "ssl", 00:21:53.569 "recv_buf_size": 4096, 00:21:53.569 "send_buf_size": 4096, 00:21:53.569 "enable_recv_pipe": true, 00:21:53.569 "enable_quickack": false, 00:21:53.569 "enable_placement_id": 0, 00:21:53.569 "enable_zerocopy_send_server": true, 00:21:53.569 "enable_zerocopy_send_client": false, 00:21:53.569 "zerocopy_threshold": 0, 00:21:53.569 "tls_version": 0, 00:21:53.569 "enable_ktls": false 00:21:53.569 } 00:21:53.569 }, 00:21:53.569 { 00:21:53.569 "method": "sock_impl_set_options", 00:21:53.569 "params": { 00:21:53.569 "impl_name": "posix", 00:21:53.570 "recv_buf_size": 2097152, 00:21:53.570 "send_buf_size": 2097152, 00:21:53.570 "enable_recv_pipe": true, 00:21:53.570 "enable_quickack": false, 00:21:53.570 "enable_placement_id": 0, 00:21:53.570 "enable_zerocopy_send_server": true, 00:21:53.570 "enable_zerocopy_send_client": false, 00:21:53.570 "zerocopy_threshold": 0, 00:21:53.570 "tls_version": 0, 00:21:53.570 "enable_ktls": false 00:21:53.570 } 00:21:53.570 }, 00:21:53.570 { 00:21:53.570 "method": "sock_impl_set_options", 00:21:53.570 "params": { 00:21:53.570 "impl_name": "uring", 00:21:53.570 "recv_buf_size": 2097152, 00:21:53.570 "send_buf_size": 2097152, 00:21:53.570 "enable_recv_pipe": true, 00:21:53.570 "enable_quickack": false, 00:21:53.570 "enable_placement_id": 0, 00:21:53.570 "enable_zerocopy_send_server": false, 00:21:53.570 "enable_zerocopy_send_client": false, 00:21:53.570 "zerocopy_threshold": 0, 00:21:53.570 "tls_version": 0, 00:21:53.570 "enable_ktls": false 00:21:53.570 } 00:21:53.570 } 00:21:53.570 ] 00:21:53.570 }, 00:21:53.570 { 00:21:53.570 "subsystem": "vmd", 00:21:53.570 "config": [] 00:21:53.570 }, 00:21:53.570 { 00:21:53.570 "subsystem": "accel", 00:21:53.570 "config": [ 00:21:53.570 { 00:21:53.570 "method": "accel_set_options", 00:21:53.570 "params": { 00:21:53.570 "small_cache_size": 128, 00:21:53.570 "large_cache_size": 16, 00:21:53.570 "task_count": 2048, 00:21:53.570 "sequence_count": 2048, 00:21:53.570 "buf_count": 2048 00:21:53.570 } 00:21:53.570 } 00:21:53.570 ] 00:21:53.570 }, 00:21:53.570 { 00:21:53.570 "subsystem": "bdev", 00:21:53.570 "config": [ 00:21:53.570 { 00:21:53.570 "method": "bdev_set_options", 00:21:53.570 "params": { 00:21:53.570 "bdev_io_pool_size": 65535, 
00:21:53.570 "bdev_io_cache_size": 256, 00:21:53.570 "bdev_auto_examine": true, 00:21:53.570 "iobuf_small_cache_size": 128, 00:21:53.570 "iobuf_large_cache_size": 16 00:21:53.570 } 00:21:53.570 }, 00:21:53.570 { 00:21:53.570 "method": "bdev_raid_set_options", 00:21:53.570 "params": { 00:21:53.570 "process_window_size_kb": 1024 00:21:53.570 } 00:21:53.570 }, 00:21:53.570 { 00:21:53.570 "method": "bdev_iscsi_set_options", 00:21:53.570 "params": { 00:21:53.570 "timeout_sec": 30 00:21:53.570 } 00:21:53.570 }, 00:21:53.570 { 00:21:53.570 "method": "bdev_nvme_set_options", 00:21:53.570 "params": { 00:21:53.570 "action_on_timeout": "none", 00:21:53.570 "timeout_us": 0, 00:21:53.570 "timeout_admin_us": 0, 00:21:53.570 "keep_alive_timeout_ms": 10000, 00:21:53.570 "arbitration_burst": 0, 00:21:53.570 "low_priority_weight": 0, 00:21:53.570 "medium_priority_weight": 0, 00:21:53.570 "high_priority_weight": 0, 00:21:53.570 "nvme_adminq_poll_period_us": 10000, 00:21:53.570 "nvme_ioq_poll_period_us": 0, 00:21:53.570 "io_queue_requests": 512, 00:21:53.570 "delay_cmd_submit": true, 00:21:53.570 "transport_retry_count": 4, 00:21:53.570 "bdev_retry_count": 3, 00:21:53.570 "transport_ack_timeout": 0, 00:21:53.570 "ctrlr_loss_timeout_sec": 0, 00:21:53.570 "reconnect_delay_sec": 0, 00:21:53.570 "fast_io_fail_timeout_sec": 0, 00:21:53.570 "disable_auto_failback": false, 00:21:53.570 "generate_uuids": false, 00:21:53.570 "transport_tos": 0, 00:21:53.570 "nvme_error_stat": false, 00:21:53.570 "rdma_srq_size": 0, 00:21:53.570 "io_path_stat": false, 00:21:53.570 "allow_accel_sequence": false, 00:21:53.570 "rdma_max_cq_size": 0, 00:21:53.570 "rdma_cm_event_timeout_ms": 0, 00:21:53.570 "dhchap_digests": [ 00:21:53.570 "sha256", 00:21:53.570 "sha384", 00:21:53.570 "sha512" 00:21:53.570 ], 00:21:53.570 "dhchap_dhgroups": [ 00:21:53.570 "null", 00:21:53.570 "ffdhe2048", 00:21:53.570 "ffdhe3072", 00:21:53.570 "ffdhe4096", 00:21:53.570 "ffdhe6144", 00:21:53.570 "ffdhe8192" 00:21:53.570 ] 00:21:53.570 } 00:21:53.570 }, 00:21:53.570 { 00:21:53.570 "method": "bdev_nvme_attach_controller", 00:21:53.570 "params": { 00:21:53.570 "name": "nvme0", 00:21:53.570 "trtype": "TCP", 00:21:53.570 "adrfam": "IPv4", 00:21:53.570 "traddr": "127.0.0.1", 00:21:53.570 "trsvcid": "4420", 00:21:53.570 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:53.570 "prchk_reftag": false, 00:21:53.570 "prchk_guard": false, 00:21:53.570 "ctrlr_loss_timeout_sec": 0, 00:21:53.570 "reconnect_delay_sec": 0, 00:21:53.570 "fast_io_fail_timeout_sec": 0, 00:21:53.570 "psk": "key0", 00:21:53.570 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:53.570 "hdgst": false, 00:21:53.570 "ddgst": false 00:21:53.570 } 00:21:53.570 }, 00:21:53.570 { 00:21:53.570 "method": "bdev_nvme_set_hotplug", 00:21:53.570 "params": { 00:21:53.570 "period_us": 100000, 00:21:53.570 "enable": false 00:21:53.570 } 00:21:53.570 }, 00:21:53.570 { 00:21:53.570 "method": "bdev_wait_for_examine" 00:21:53.570 } 00:21:53.570 ] 00:21:53.570 }, 00:21:53.570 { 00:21:53.570 "subsystem": "nbd", 00:21:53.570 "config": [] 00:21:53.570 } 00:21:53.570 ] 00:21:53.570 }' 00:21:53.570 12:46:19 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:53.570 12:46:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:53.829 [2024-07-12 12:46:19.654647] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:21:53.829 [2024-07-12 12:46:19.654976] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85776 ] 00:21:53.829 [2024-07-12 12:46:19.786631] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:54.088 [2024-07-12 12:46:19.917133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:54.088 [2024-07-12 12:46:20.055372] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:54.088 [2024-07-12 12:46:20.112577] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:54.656 12:46:20 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:54.656 12:46:20 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:21:54.656 12:46:20 keyring_file -- keyring/file.sh@120 -- # jq length 00:21:54.656 12:46:20 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:21:54.656 12:46:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:54.914 12:46:20 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:21:54.914 12:46:20 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:21:54.914 12:46:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:54.914 12:46:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:54.914 12:46:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:54.914 12:46:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:54.914 12:46:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:55.172 12:46:21 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:21:55.172 12:46:21 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:21:55.172 12:46:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:55.172 12:46:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:55.172 12:46:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:55.172 12:46:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:55.172 12:46:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:55.431 12:46:21 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:21:55.431 12:46:21 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:21:55.431 12:46:21 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:21:55.431 12:46:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:21:55.690 12:46:21 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:21:55.690 12:46:21 keyring_file -- keyring/file.sh@1 -- # cleanup 00:21:55.690 12:46:21 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.nKzTflCqei /tmp/tmp.qgLEPWWplE 00:21:55.690 12:46:21 keyring_file -- keyring/file.sh@20 -- # killprocess 85776 00:21:55.690 12:46:21 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 85776 ']' 00:21:55.690 12:46:21 keyring_file -- common/autotest_common.sh@952 -- # kill -0 85776 00:21:55.690 12:46:21 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:21:55.690 12:46:21 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:55.690 12:46:21 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85776 00:21:55.690 killing process with pid 85776 00:21:55.690 Received shutdown signal, test time was about 1.000000 seconds 00:21:55.690 00:21:55.690 Latency(us) 00:21:55.690 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:55.690 =================================================================================================================== 00:21:55.690 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:55.690 12:46:21 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:55.690 12:46:21 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:55.690 12:46:21 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85776' 00:21:55.690 12:46:21 keyring_file -- common/autotest_common.sh@967 -- # kill 85776 00:21:55.690 12:46:21 keyring_file -- common/autotest_common.sh@972 -- # wait 85776 00:21:55.949 12:46:21 keyring_file -- keyring/file.sh@21 -- # killprocess 85515 00:21:55.949 12:46:21 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 85515 ']' 00:21:55.949 12:46:21 keyring_file -- common/autotest_common.sh@952 -- # kill -0 85515 00:21:55.949 12:46:21 keyring_file -- common/autotest_common.sh@953 -- # uname 00:21:55.949 12:46:21 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:55.949 12:46:21 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85515 00:21:55.949 killing process with pid 85515 00:21:55.949 12:46:22 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:55.949 12:46:22 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:55.949 12:46:22 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85515' 00:21:55.949 12:46:22 keyring_file -- common/autotest_common.sh@967 -- # kill 85515 00:21:55.949 [2024-07-12 12:46:22.009145] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:55.949 12:46:22 keyring_file -- common/autotest_common.sh@972 -- # wait 85515 00:21:56.515 ************************************ 00:21:56.515 END TEST keyring_file 00:21:56.515 ************************************ 00:21:56.515 00:21:56.515 real 0m15.829s 00:21:56.515 user 0m39.103s 00:21:56.515 sys 0m3.143s 00:21:56.515 12:46:22 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:56.515 12:46:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:56.515 12:46:22 -- common/autotest_common.sh@1142 -- # return 0 00:21:56.515 12:46:22 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:21:56.515 12:46:22 -- spdk/autotest.sh@297 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:21:56.515 12:46:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:56.515 12:46:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:56.515 12:46:22 -- common/autotest_common.sh@10 -- # set +x 00:21:56.515 ************************************ 00:21:56.515 START TEST keyring_linux 00:21:56.515 ************************************ 00:21:56.515 12:46:22 keyring_linux -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:21:56.515 * Looking for test 
storage... 00:21:56.774 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:56.774 12:46:22 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:56.774 12:46:22 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:56.774 12:46:22 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:21:56.774 12:46:22 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:56.774 12:46:22 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:56.774 12:46:22 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:56.774 12:46:22 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:56.774 12:46:22 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:56.774 12:46:22 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:56.774 12:46:22 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:56.774 12:46:22 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:56.774 12:46:22 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:56.774 12:46:22 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:56.774 12:46:22 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:16360ad5-8c23-4d49-afe0-9a35c426fec5 00:21:56.774 12:46:22 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=16360ad5-8c23-4d49-afe0-9a35c426fec5 00:21:56.774 12:46:22 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:56.774 12:46:22 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:56.774 12:46:22 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:56.774 12:46:22 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:56.774 12:46:22 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:56.774 12:46:22 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:56.774 12:46:22 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:56.774 12:46:22 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:56.774 12:46:22 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.774 12:46:22 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.774 12:46:22 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.774 12:46:22 keyring_linux -- paths/export.sh@5 -- # export PATH 00:21:56.774 12:46:22 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.774 12:46:22 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:21:56.774 12:46:22 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:56.774 12:46:22 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:56.774 12:46:22 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:56.774 12:46:22 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:56.774 12:46:22 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:56.774 12:46:22 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:56.774 12:46:22 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:56.774 12:46:22 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:56.774 12:46:22 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:56.774 12:46:22 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:56.774 12:46:22 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:56.774 12:46:22 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:21:56.774 12:46:22 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:21:56.774 12:46:22 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:21:56.774 12:46:22 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:21:56.774 12:46:22 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:21:56.774 12:46:22 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:21:56.775 12:46:22 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:56.775 12:46:22 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:21:56.775 12:46:22 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:21:56.775 12:46:22 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:56.775 12:46:22 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:56.775 12:46:22 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:21:56.775 12:46:22 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:56.775 12:46:22 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:56.775 12:46:22 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:21:56.775 12:46:22 keyring_linux -- nvmf/common.sh@705 -- # python - 00:21:56.775 12:46:22 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:21:56.775 12:46:22 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:21:56.775 /tmp/:spdk-test:key0 00:21:56.775 12:46:22 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:21:56.775 12:46:22 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:21:56.775 12:46:22 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:21:56.775 12:46:22 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:56.775 12:46:22 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:21:56.775 12:46:22 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:21:56.775 12:46:22 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:21:56.775 12:46:22 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:56.775 12:46:22 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:21:56.775 12:46:22 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:56.775 12:46:22 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:21:56.775 12:46:22 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:21:56.775 12:46:22 keyring_linux -- nvmf/common.sh@705 -- # python - 00:21:56.775 12:46:22 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:21:56.775 /tmp/:spdk-test:key1 00:21:56.775 12:46:22 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:21:56.775 12:46:22 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85890 00:21:56.775 12:46:22 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:56.775 12:46:22 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85890 00:21:56.775 12:46:22 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 85890 ']' 00:21:56.775 12:46:22 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:56.775 12:46:22 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:56.775 12:46:22 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:56.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:56.775 12:46:22 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:56.775 12:46:22 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:56.775 [2024-07-12 12:46:22.788813] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
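[editor's note] prep_key above builds each PSK file via format_interchange_psk, whose xtrace shows a NVMeTLSkey-1 prefix, the hex key string, digest 0 and an inline "python -" heredoc. The heredoc body itself is not captured in the log, so the sketch below is a reconstruction of what it appears to compute (ASCII key bytes plus a CRC32, base64-encoded into the "NVMeTLSkey-1:00:...:" envelope seen a few lines further down); the little-endian CRC byte order and the python3 interpreter name are assumptions.

format_interchange_psk_sketch() {
  local key=$1 digest=$2
  python3 - "$key" "$digest" << 'EOF'
import base64, sys, zlib

key, digest = sys.argv[1].encode(), int(sys.argv[2])
# Append the CRC32 of the key bytes (byte order assumed little-endian), base64 the
# result, and wrap it in the NVMeTLSkey-1:<digest>:<base64>: interchange envelope.
crc = zlib.crc32(key).to_bytes(4, "little")
print("NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()), end="")
EOF
}

# e.g. the keyring_linux key0 prepared above:
format_interchange_psk_sketch 00112233445566778899aabbccddeeff 0 > /tmp/:spdk-test:key0
chmod 0600 /tmp/:spdk-test:key0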
00:21:56.775 [2024-07-12 12:46:22.789177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85890 ] 00:21:57.051 [2024-07-12 12:46:22.929429] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.051 [2024-07-12 12:46:23.045399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:57.051 [2024-07-12 12:46:23.110443] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:58.002 12:46:23 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:58.002 12:46:23 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:21:58.002 12:46:23 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:21:58.002 12:46:23 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.002 12:46:23 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:58.002 [2024-07-12 12:46:23.750055] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:58.002 null0 00:21:58.002 [2024-07-12 12:46:23.782028] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:58.002 [2024-07-12 12:46:23.782331] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:58.002 12:46:23 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.002 12:46:23 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:21:58.002 955982341 00:21:58.002 12:46:23 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:21:58.002 786995002 00:21:58.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:58.002 12:46:23 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85908 00:21:58.002 12:46:23 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:21:58.002 12:46:23 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85908 /var/tmp/bperf.sock 00:21:58.002 12:46:23 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 85908 ']' 00:21:58.002 12:46:23 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:58.002 12:46:23 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:58.002 12:46:23 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:58.002 12:46:23 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:58.002 12:46:23 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:58.002 [2024-07-12 12:46:23.867268] Starting SPDK v24.09-pre git sha1 07d3b03c8 / DPDK 24.03.0 initialization... 
00:21:58.002 [2024-07-12 12:46:23.867640] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85908 ] 00:21:58.002 [2024-07-12 12:46:24.008303] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.259 [2024-07-12 12:46:24.155582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:58.840 12:46:24 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:58.840 12:46:24 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:21:58.840 12:46:24 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:21:58.840 12:46:24 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:21:59.129 12:46:25 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:21:59.129 12:46:25 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:59.387 [2024-07-12 12:46:25.312548] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementaion override: uring 00:21:59.387 12:46:25 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:21:59.387 12:46:25 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:21:59.645 [2024-07-12 12:46:25.567578] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:59.645 nvme0n1 00:21:59.645 12:46:25 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:21:59.645 12:46:25 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:21:59.645 12:46:25 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:21:59.645 12:46:25 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:21:59.645 12:46:25 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:59.645 12:46:25 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:21:59.903 12:46:25 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:21:59.903 12:46:25 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:21:59.903 12:46:25 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:21:59.903 12:46:25 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:21:59.903 12:46:25 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:59.903 12:46:25 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:59.903 12:46:25 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:22:00.161 12:46:26 keyring_linux -- keyring/linux.sh@25 -- # sn=955982341 00:22:00.161 12:46:26 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:22:00.161 12:46:26 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:22:00.161 
12:46:26 keyring_linux -- keyring/linux.sh@26 -- # [[ 955982341 == \9\5\5\9\8\2\3\4\1 ]] 00:22:00.161 12:46:26 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 955982341 00:22:00.161 12:46:26 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:22:00.161 12:46:26 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:00.419 Running I/O for 1 seconds... 00:22:01.354 00:22:01.354 Latency(us) 00:22:01.354 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:01.354 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:01.354 nvme0n1 : 1.01 11803.82 46.11 0.00 0.00 10779.93 5391.83 14358.34 00:22:01.354 =================================================================================================================== 00:22:01.354 Total : 11803.82 46.11 0.00 0.00 10779.93 5391.83 14358.34 00:22:01.354 0 00:22:01.354 12:46:27 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:22:01.354 12:46:27 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:22:01.612 12:46:27 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:22:01.612 12:46:27 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:22:01.612 12:46:27 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:22:01.612 12:46:27 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:22:01.612 12:46:27 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:22:01.612 12:46:27 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:22:01.870 12:46:27 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:22:01.870 12:46:27 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:22:01.870 12:46:27 keyring_linux -- keyring/linux.sh@23 -- # return 00:22:01.870 12:46:27 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:01.870 12:46:27 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:22:01.870 12:46:27 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:01.870 12:46:27 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:22:01.870 12:46:27 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:01.870 12:46:27 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:22:01.870 12:46:27 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:01.870 12:46:27 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:01.870 12:46:27 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:22:02.128 [2024-07-12 12:46:28.065687] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:02.128 [2024-07-12 12:46:28.065938] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2388460 (107): Transport endpoint is not connected 00:22:02.128 [2024-07-12 12:46:28.066926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2388460 (9): Bad file descriptor 00:22:02.128 [2024-07-12 12:46:28.067923] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:02.128 [2024-07-12 12:46:28.067948] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:22:02.128 [2024-07-12 12:46:28.067959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:02.128 request: 00:22:02.128 { 00:22:02.128 "name": "nvme0", 00:22:02.128 "trtype": "tcp", 00:22:02.128 "traddr": "127.0.0.1", 00:22:02.128 "adrfam": "ipv4", 00:22:02.128 "trsvcid": "4420", 00:22:02.128 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:02.128 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:02.128 "prchk_reftag": false, 00:22:02.128 "prchk_guard": false, 00:22:02.128 "hdgst": false, 00:22:02.128 "ddgst": false, 00:22:02.128 "psk": ":spdk-test:key1", 00:22:02.128 "method": "bdev_nvme_attach_controller", 00:22:02.128 "req_id": 1 00:22:02.128 } 00:22:02.128 Got JSON-RPC error response 00:22:02.128 response: 00:22:02.128 { 00:22:02.128 "code": -5, 00:22:02.128 "message": "Input/output error" 00:22:02.128 } 00:22:02.128 12:46:28 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:22:02.128 12:46:28 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:02.128 12:46:28 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:02.128 12:46:28 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:02.128 12:46:28 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:22:02.128 12:46:28 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:22:02.128 12:46:28 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:22:02.128 12:46:28 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:22:02.128 12:46:28 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:22:02.128 12:46:28 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:22:02.128 12:46:28 keyring_linux -- keyring/linux.sh@33 -- # sn=955982341 00:22:02.128 12:46:28 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 955982341 00:22:02.128 1 links removed 00:22:02.128 12:46:28 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:22:02.128 12:46:28 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:22:02.128 12:46:28 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:22:02.128 12:46:28 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:22:02.128 12:46:28 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:22:02.128 12:46:28 keyring_linux -- keyring/linux.sh@33 -- # sn=786995002 00:22:02.128 12:46:28 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 786995002 00:22:02.128 1 links removed 00:22:02.128 12:46:28 keyring_linux 
-- keyring/linux.sh@41 -- # killprocess 85908 00:22:02.128 12:46:28 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 85908 ']' 00:22:02.128 12:46:28 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 85908 00:22:02.128 12:46:28 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:22:02.128 12:46:28 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:02.128 12:46:28 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85908 00:22:02.128 killing process with pid 85908 00:22:02.128 Received shutdown signal, test time was about 1.000000 seconds 00:22:02.128 00:22:02.128 Latency(us) 00:22:02.128 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.128 =================================================================================================================== 00:22:02.128 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:02.128 12:46:28 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:02.128 12:46:28 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:02.128 12:46:28 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85908' 00:22:02.128 12:46:28 keyring_linux -- common/autotest_common.sh@967 -- # kill 85908 00:22:02.128 12:46:28 keyring_linux -- common/autotest_common.sh@972 -- # wait 85908 00:22:02.386 12:46:28 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85890 00:22:02.386 12:46:28 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 85890 ']' 00:22:02.386 12:46:28 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 85890 00:22:02.386 12:46:28 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:22:02.386 12:46:28 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:02.386 12:46:28 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 85890 00:22:02.386 killing process with pid 85890 00:22:02.386 12:46:28 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:02.386 12:46:28 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:02.386 12:46:28 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 85890' 00:22:02.386 12:46:28 keyring_linux -- common/autotest_common.sh@967 -- # kill 85890 00:22:02.386 12:46:28 keyring_linux -- common/autotest_common.sh@972 -- # wait 85890 00:22:02.952 00:22:02.952 real 0m6.349s 00:22:02.952 user 0m12.130s 00:22:02.952 sys 0m1.627s 00:22:02.952 12:46:28 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:02.952 12:46:28 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:22:02.952 ************************************ 00:22:02.952 END TEST keyring_linux 00:22:02.952 ************************************ 00:22:02.952 12:46:28 -- common/autotest_common.sh@1142 -- # return 0 00:22:02.952 12:46:28 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:22:02.952 12:46:28 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:22:02.952 12:46:28 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:22:02.952 12:46:28 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:22:02.952 12:46:28 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:22:02.952 12:46:28 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:22:02.952 12:46:28 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:22:02.952 12:46:28 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:22:02.952 12:46:28 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 
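
The keyring_linux run that ends above exercises SPDK's Linux-keyring PSK path: the interchange-format keys are registered in the kernel session keyring, bdevperf is started with --wait-for-rpc, and the controller attach references the key by keyring name rather than passing raw key material. The shell sketch below condenses that flow from the traced commands; it assumes a bdevperf instance already listening on /var/tmp/bperf.sock and an NVMe/TCP listener on 127.0.0.1:4420 as set up earlier in the log, shows only key0 (key1 is handled identically), and reuses the NQNs and key string exactly as logged.

# Register the interchange-format PSK in the kernel session keyring (@s),
# under the same description the test uses.
keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s

# Enable the keyring module and finish framework init on the bdevperf RPC socket,
# then attach the controller; --psk names the keyring entry, not the key bytes.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/bperf.sock keyring_linux_set_options --enable
$rpc -s /var/tmp/bperf.sock framework_start_init
$rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0

# Cleanup mirrors the unlink_key helper: resolve the key's serial number, then unlink it.
sn=$(keyctl search @s user :spdk-test:key0)
keyctl print "$sn"
keyctl unlink "$sn" @s

The later attach that names :spdk-test:key1 is the expected-failure path of the same RPC, which is why the JSON-RPC Input/output error above is treated as a pass by the surrounding NOT wrapper.
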
00:22:02.952 12:46:28 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:22:02.952 12:46:28 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:22:02.952 12:46:28 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:22:02.952 12:46:28 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:22:02.952 12:46:28 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:22:02.952 12:46:28 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:22:02.953 12:46:28 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:22:02.953 12:46:28 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:22:02.953 12:46:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:02.953 12:46:28 -- common/autotest_common.sh@10 -- # set +x 00:22:02.953 12:46:28 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:22:02.953 12:46:28 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:22:02.953 12:46:28 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:22:02.953 12:46:28 -- common/autotest_common.sh@10 -- # set +x 00:22:04.852 INFO: APP EXITING 00:22:04.853 INFO: killing all VMs 00:22:04.853 INFO: killing vhost app 00:22:04.853 INFO: EXIT DONE 00:22:05.110 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:05.110 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:22:05.367 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:22:05.931 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:05.931 Cleaning 00:22:05.931 Removing: /var/run/dpdk/spdk0/config 00:22:05.931 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:22:05.932 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:22:05.932 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:22:05.932 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:22:05.932 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:22:05.932 Removing: /var/run/dpdk/spdk0/hugepage_info 00:22:05.932 Removing: /var/run/dpdk/spdk1/config 00:22:05.932 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:22:05.932 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:22:05.932 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:22:05.932 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:22:05.932 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:22:05.932 Removing: /var/run/dpdk/spdk1/hugepage_info 00:22:05.932 Removing: /var/run/dpdk/spdk2/config 00:22:05.932 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:22:05.932 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:22:05.932 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:22:05.932 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:22:05.932 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:22:05.932 Removing: /var/run/dpdk/spdk2/hugepage_info 00:22:05.932 Removing: /var/run/dpdk/spdk3/config 00:22:05.932 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:22:05.932 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:22:05.932 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:22:05.932 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:22:05.932 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:22:05.932 Removing: /var/run/dpdk/spdk3/hugepage_info 00:22:05.932 Removing: /var/run/dpdk/spdk4/config 00:22:05.932 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:22:05.932 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:22:05.932 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:22:05.932 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:22:05.932 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:22:05.932 Removing: /var/run/dpdk/spdk4/hugepage_info 00:22:05.932 Removing: /dev/shm/nvmf_trace.0 00:22:05.932 Removing: /dev/shm/spdk_tgt_trace.pid58866 00:22:05.932 Removing: /var/run/dpdk/spdk0 00:22:05.932 Removing: /var/run/dpdk/spdk1 00:22:05.932 Removing: /var/run/dpdk/spdk2 00:22:05.932 Removing: /var/run/dpdk/spdk3 00:22:06.189 Removing: /var/run/dpdk/spdk4 00:22:06.189 Removing: /var/run/dpdk/spdk_pid58710 00:22:06.189 Removing: /var/run/dpdk/spdk_pid58866 00:22:06.189 Removing: /var/run/dpdk/spdk_pid59064 00:22:06.189 Removing: /var/run/dpdk/spdk_pid59156 00:22:06.190 Removing: /var/run/dpdk/spdk_pid59189 00:22:06.190 Removing: /var/run/dpdk/spdk_pid59293 00:22:06.190 Removing: /var/run/dpdk/spdk_pid59311 00:22:06.190 Removing: /var/run/dpdk/spdk_pid59440 00:22:06.190 Removing: /var/run/dpdk/spdk_pid59625 00:22:06.190 Removing: /var/run/dpdk/spdk_pid59771 00:22:06.190 Removing: /var/run/dpdk/spdk_pid59842 00:22:06.190 Removing: /var/run/dpdk/spdk_pid59923 00:22:06.190 Removing: /var/run/dpdk/spdk_pid60015 00:22:06.190 Removing: /var/run/dpdk/spdk_pid60091 00:22:06.190 Removing: /var/run/dpdk/spdk_pid60125 00:22:06.190 Removing: /var/run/dpdk/spdk_pid60160 00:22:06.190 Removing: /var/run/dpdk/spdk_pid60222 00:22:06.190 Removing: /var/run/dpdk/spdk_pid60306 00:22:06.190 Removing: /var/run/dpdk/spdk_pid60743 00:22:06.190 Removing: /var/run/dpdk/spdk_pid60795 00:22:06.190 Removing: /var/run/dpdk/spdk_pid60846 00:22:06.190 Removing: /var/run/dpdk/spdk_pid60862 00:22:06.190 Removing: /var/run/dpdk/spdk_pid60935 00:22:06.190 Removing: /var/run/dpdk/spdk_pid60951 00:22:06.190 Removing: /var/run/dpdk/spdk_pid61023 00:22:06.190 Removing: /var/run/dpdk/spdk_pid61039 00:22:06.190 Removing: /var/run/dpdk/spdk_pid61085 00:22:06.190 Removing: /var/run/dpdk/spdk_pid61103 00:22:06.190 Removing: /var/run/dpdk/spdk_pid61148 00:22:06.190 Removing: /var/run/dpdk/spdk_pid61165 00:22:06.190 Removing: /var/run/dpdk/spdk_pid61289 00:22:06.190 Removing: /var/run/dpdk/spdk_pid61324 00:22:06.190 Removing: /var/run/dpdk/spdk_pid61399 00:22:06.190 Removing: /var/run/dpdk/spdk_pid61456 00:22:06.190 Removing: /var/run/dpdk/spdk_pid61475 00:22:06.190 Removing: /var/run/dpdk/spdk_pid61539 00:22:06.190 Removing: /var/run/dpdk/spdk_pid61579 00:22:06.190 Removing: /var/run/dpdk/spdk_pid61608 00:22:06.190 Removing: /var/run/dpdk/spdk_pid61649 00:22:06.190 Removing: /var/run/dpdk/spdk_pid61678 00:22:06.190 Removing: /var/run/dpdk/spdk_pid61718 00:22:06.190 Removing: /var/run/dpdk/spdk_pid61747 00:22:06.190 Removing: /var/run/dpdk/spdk_pid61787 00:22:06.190 Removing: /var/run/dpdk/spdk_pid61822 00:22:06.190 Removing: /var/run/dpdk/spdk_pid61858 00:22:06.190 Removing: /var/run/dpdk/spdk_pid61898 00:22:06.190 Removing: /var/run/dpdk/spdk_pid61927 00:22:06.190 Removing: /var/run/dpdk/spdk_pid61968 00:22:06.190 Removing: /var/run/dpdk/spdk_pid62003 00:22:06.190 Removing: /var/run/dpdk/spdk_pid62037 00:22:06.190 Removing: /var/run/dpdk/spdk_pid62072 00:22:06.190 Removing: /var/run/dpdk/spdk_pid62106 00:22:06.190 Removing: /var/run/dpdk/spdk_pid62146 00:22:06.190 Removing: /var/run/dpdk/spdk_pid62189 00:22:06.190 Removing: /var/run/dpdk/spdk_pid62222 00:22:06.190 Removing: /var/run/dpdk/spdk_pid62259 00:22:06.190 Removing: /var/run/dpdk/spdk_pid62329 00:22:06.190 Removing: /var/run/dpdk/spdk_pid62422 00:22:06.190 Removing: /var/run/dpdk/spdk_pid62731 00:22:06.190 Removing: /var/run/dpdk/spdk_pid62743 00:22:06.190 
Removing: /var/run/dpdk/spdk_pid62774 00:22:06.190 Removing: /var/run/dpdk/spdk_pid62793 00:22:06.190 Removing: /var/run/dpdk/spdk_pid62809 00:22:06.190 Removing: /var/run/dpdk/spdk_pid62833 00:22:06.190 Removing: /var/run/dpdk/spdk_pid62847 00:22:06.190 Removing: /var/run/dpdk/spdk_pid62862 00:22:06.190 Removing: /var/run/dpdk/spdk_pid62887 00:22:06.190 Removing: /var/run/dpdk/spdk_pid62900 00:22:06.190 Removing: /var/run/dpdk/spdk_pid62921 00:22:06.190 Removing: /var/run/dpdk/spdk_pid62940 00:22:06.190 Removing: /var/run/dpdk/spdk_pid62959 00:22:06.190 Removing: /var/run/dpdk/spdk_pid62975 00:22:06.190 Removing: /var/run/dpdk/spdk_pid62994 00:22:06.190 Removing: /var/run/dpdk/spdk_pid63013 00:22:06.190 Removing: /var/run/dpdk/spdk_pid63028 00:22:06.190 Removing: /var/run/dpdk/spdk_pid63047 00:22:06.190 Removing: /var/run/dpdk/spdk_pid63066 00:22:06.190 Removing: /var/run/dpdk/spdk_pid63082 00:22:06.190 Removing: /var/run/dpdk/spdk_pid63118 00:22:06.190 Removing: /var/run/dpdk/spdk_pid63131 00:22:06.190 Removing: /var/run/dpdk/spdk_pid63161 00:22:06.190 Removing: /var/run/dpdk/spdk_pid63225 00:22:06.190 Removing: /var/run/dpdk/spdk_pid63259 00:22:06.448 Removing: /var/run/dpdk/spdk_pid63263 00:22:06.448 Removing: /var/run/dpdk/spdk_pid63297 00:22:06.448 Removing: /var/run/dpdk/spdk_pid63312 00:22:06.448 Removing: /var/run/dpdk/spdk_pid63314 00:22:06.448 Removing: /var/run/dpdk/spdk_pid63362 00:22:06.448 Removing: /var/run/dpdk/spdk_pid63370 00:22:06.448 Removing: /var/run/dpdk/spdk_pid63404 00:22:06.448 Removing: /var/run/dpdk/spdk_pid63419 00:22:06.448 Removing: /var/run/dpdk/spdk_pid63423 00:22:06.448 Removing: /var/run/dpdk/spdk_pid63438 00:22:06.448 Removing: /var/run/dpdk/spdk_pid63448 00:22:06.448 Removing: /var/run/dpdk/spdk_pid63457 00:22:06.448 Removing: /var/run/dpdk/spdk_pid63472 00:22:06.448 Removing: /var/run/dpdk/spdk_pid63476 00:22:06.448 Removing: /var/run/dpdk/spdk_pid63510 00:22:06.448 Removing: /var/run/dpdk/spdk_pid63542 00:22:06.448 Removing: /var/run/dpdk/spdk_pid63546 00:22:06.448 Removing: /var/run/dpdk/spdk_pid63580 00:22:06.448 Removing: /var/run/dpdk/spdk_pid63590 00:22:06.448 Removing: /var/run/dpdk/spdk_pid63597 00:22:06.448 Removing: /var/run/dpdk/spdk_pid63643 00:22:06.448 Removing: /var/run/dpdk/spdk_pid63655 00:22:06.448 Removing: /var/run/dpdk/spdk_pid63681 00:22:06.448 Removing: /var/run/dpdk/spdk_pid63694 00:22:06.448 Removing: /var/run/dpdk/spdk_pid63702 00:22:06.448 Removing: /var/run/dpdk/spdk_pid63709 00:22:06.448 Removing: /var/run/dpdk/spdk_pid63717 00:22:06.448 Removing: /var/run/dpdk/spdk_pid63730 00:22:06.448 Removing: /var/run/dpdk/spdk_pid63737 00:22:06.448 Removing: /var/run/dpdk/spdk_pid63745 00:22:06.448 Removing: /var/run/dpdk/spdk_pid63819 00:22:06.448 Removing: /var/run/dpdk/spdk_pid63872 00:22:06.448 Removing: /var/run/dpdk/spdk_pid63982 00:22:06.448 Removing: /var/run/dpdk/spdk_pid64020 00:22:06.448 Removing: /var/run/dpdk/spdk_pid64061 00:22:06.448 Removing: /var/run/dpdk/spdk_pid64075 00:22:06.448 Removing: /var/run/dpdk/spdk_pid64097 00:22:06.448 Removing: /var/run/dpdk/spdk_pid64117 00:22:06.448 Removing: /var/run/dpdk/spdk_pid64154 00:22:06.448 Removing: /var/run/dpdk/spdk_pid64170 00:22:06.448 Removing: /var/run/dpdk/spdk_pid64240 00:22:06.448 Removing: /var/run/dpdk/spdk_pid64261 00:22:06.448 Removing: /var/run/dpdk/spdk_pid64305 00:22:06.448 Removing: /var/run/dpdk/spdk_pid64377 00:22:06.448 Removing: /var/run/dpdk/spdk_pid64443 00:22:06.448 Removing: /var/run/dpdk/spdk_pid64472 00:22:06.448 Removing: 
/var/run/dpdk/spdk_pid64558 00:22:06.448 Removing: /var/run/dpdk/spdk_pid64607 00:22:06.448 Removing: /var/run/dpdk/spdk_pid64645 00:22:06.448 Removing: /var/run/dpdk/spdk_pid64864 00:22:06.448 Removing: /var/run/dpdk/spdk_pid64961 00:22:06.448 Removing: /var/run/dpdk/spdk_pid64984 00:22:06.448 Removing: /var/run/dpdk/spdk_pid65299 00:22:06.448 Removing: /var/run/dpdk/spdk_pid65337 00:22:06.448 Removing: /var/run/dpdk/spdk_pid65622 00:22:06.448 Removing: /var/run/dpdk/spdk_pid66038 00:22:06.448 Removing: /var/run/dpdk/spdk_pid66311 00:22:06.448 Removing: /var/run/dpdk/spdk_pid67100 00:22:06.448 Removing: /var/run/dpdk/spdk_pid67921 00:22:06.448 Removing: /var/run/dpdk/spdk_pid68037 00:22:06.448 Removing: /var/run/dpdk/spdk_pid68105 00:22:06.448 Removing: /var/run/dpdk/spdk_pid69366 00:22:06.448 Removing: /var/run/dpdk/spdk_pid69572 00:22:06.448 Removing: /var/run/dpdk/spdk_pid72943 00:22:06.448 Removing: /var/run/dpdk/spdk_pid73246 00:22:06.448 Removing: /var/run/dpdk/spdk_pid73354 00:22:06.448 Removing: /var/run/dpdk/spdk_pid73488 00:22:06.448 Removing: /var/run/dpdk/spdk_pid73510 00:22:06.448 Removing: /var/run/dpdk/spdk_pid73543 00:22:06.448 Removing: /var/run/dpdk/spdk_pid73571 00:22:06.448 Removing: /var/run/dpdk/spdk_pid73663 00:22:06.448 Removing: /var/run/dpdk/spdk_pid73792 00:22:06.448 Removing: /var/run/dpdk/spdk_pid73953 00:22:06.448 Removing: /var/run/dpdk/spdk_pid74029 00:22:06.448 Removing: /var/run/dpdk/spdk_pid74217 00:22:06.448 Removing: /var/run/dpdk/spdk_pid74306 00:22:06.448 Removing: /var/run/dpdk/spdk_pid74393 00:22:06.448 Removing: /var/run/dpdk/spdk_pid74706 00:22:06.448 Removing: /var/run/dpdk/spdk_pid75093 00:22:06.448 Removing: /var/run/dpdk/spdk_pid75095 00:22:06.448 Removing: /var/run/dpdk/spdk_pid75373 00:22:06.448 Removing: /var/run/dpdk/spdk_pid75387 00:22:06.448 Removing: /var/run/dpdk/spdk_pid75405 00:22:06.448 Removing: /var/run/dpdk/spdk_pid75440 00:22:06.448 Removing: /var/run/dpdk/spdk_pid75445 00:22:06.448 Removing: /var/run/dpdk/spdk_pid75747 00:22:06.706 Removing: /var/run/dpdk/spdk_pid75791 00:22:06.706 Removing: /var/run/dpdk/spdk_pid76069 00:22:06.706 Removing: /var/run/dpdk/spdk_pid76271 00:22:06.706 Removing: /var/run/dpdk/spdk_pid76648 00:22:06.706 Removing: /var/run/dpdk/spdk_pid77157 00:22:06.706 Removing: /var/run/dpdk/spdk_pid77974 00:22:06.706 Removing: /var/run/dpdk/spdk_pid78555 00:22:06.706 Removing: /var/run/dpdk/spdk_pid78557 00:22:06.706 Removing: /var/run/dpdk/spdk_pid80466 00:22:06.706 Removing: /var/run/dpdk/spdk_pid80532 00:22:06.706 Removing: /var/run/dpdk/spdk_pid80587 00:22:06.706 Removing: /var/run/dpdk/spdk_pid80647 00:22:06.706 Removing: /var/run/dpdk/spdk_pid80768 00:22:06.706 Removing: /var/run/dpdk/spdk_pid80823 00:22:06.706 Removing: /var/run/dpdk/spdk_pid80883 00:22:06.706 Removing: /var/run/dpdk/spdk_pid80944 00:22:06.706 Removing: /var/run/dpdk/spdk_pid81259 00:22:06.706 Removing: /var/run/dpdk/spdk_pid82418 00:22:06.706 Removing: /var/run/dpdk/spdk_pid82564 00:22:06.706 Removing: /var/run/dpdk/spdk_pid82801 00:22:06.707 Removing: /var/run/dpdk/spdk_pid83359 00:22:06.707 Removing: /var/run/dpdk/spdk_pid83518 00:22:06.707 Removing: /var/run/dpdk/spdk_pid83675 00:22:06.707 Removing: /var/run/dpdk/spdk_pid83771 00:22:06.707 Removing: /var/run/dpdk/spdk_pid83934 00:22:06.707 Removing: /var/run/dpdk/spdk_pid84043 00:22:06.707 Removing: /var/run/dpdk/spdk_pid84696 00:22:06.707 Removing: /var/run/dpdk/spdk_pid84738 00:22:06.707 Removing: /var/run/dpdk/spdk_pid84768 00:22:06.707 Removing: /var/run/dpdk/spdk_pid85023 
00:22:06.707 Removing: /var/run/dpdk/spdk_pid85059 00:22:06.707 Removing: /var/run/dpdk/spdk_pid85089 00:22:06.707 Removing: /var/run/dpdk/spdk_pid85515 00:22:06.707 Removing: /var/run/dpdk/spdk_pid85528 00:22:06.707 Removing: /var/run/dpdk/spdk_pid85776 00:22:06.707 Removing: /var/run/dpdk/spdk_pid85890 00:22:06.707 Removing: /var/run/dpdk/spdk_pid85908 00:22:06.707 Clean 00:22:06.707 12:46:32 -- common/autotest_common.sh@1451 -- # return 0 00:22:06.707 12:46:32 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:22:06.707 12:46:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:06.707 12:46:32 -- common/autotest_common.sh@10 -- # set +x 00:22:06.707 12:46:32 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:22:06.707 12:46:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:06.707 12:46:32 -- common/autotest_common.sh@10 -- # set +x 00:22:06.707 12:46:32 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:06.707 12:46:32 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:22:06.707 12:46:32 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:22:06.707 12:46:32 -- spdk/autotest.sh@391 -- # hash lcov 00:22:06.707 12:46:32 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:22:06.707 12:46:32 -- spdk/autotest.sh@393 -- # hostname 00:22:06.707 12:46:32 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:22:06.965 geninfo: WARNING: invalid characters removed from testname! 
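
At this point the post-test coverage capture has finished (the geninfo warning above comes from that run); the entries that follow merge it with the pre-test baseline and strip third-party sources from the combined tracefile. A condensed sketch of that sequence, using the same paths and filter patterns as the logged commands (the repeated --rc switches are collected into $RC here for readability; the log passes the full geninfo/genhtml set on every call):

RC='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
OUT=/home/vagrant/spdk_repo/spdk/../output

# Merge the pre-test baseline and the post-test capture into one tracefile.
lcov $RC --no-external -q -a $OUT/cov_base.info -a $OUT/cov_test.info -o $OUT/cov_total.info

# Remove DPDK, system headers, and example/app sources that are not coverage
# targets, rewriting cov_total.info in place on each pass.
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $RC --no-external -q -r $OUT/cov_total.info "$pat" -o $OUT/cov_total.info
done
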
00:22:33.546 12:46:56 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:33.546 12:46:59 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:36.090 12:47:02 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:38.615 12:47:04 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:41.142 12:47:07 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:43.671 12:47:09 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:46.199 12:47:12 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:22:46.199 12:47:12 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:46.199 12:47:12 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:22:46.199 12:47:12 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:46.199 12:47:12 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:46.199 12:47:12 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.199 12:47:12 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.199 12:47:12 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.199 12:47:12 -- paths/export.sh@5 -- $ export PATH 00:22:46.199 12:47:12 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.199 12:47:12 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:22:46.199 12:47:12 -- common/autobuild_common.sh@444 -- $ date +%s 00:22:46.199 12:47:12 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720788432.XXXXXX 00:22:46.200 12:47:12 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720788432.HC8ZsG 00:22:46.200 12:47:12 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:22:46.200 12:47:12 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:22:46.200 12:47:12 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:22:46.200 12:47:12 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:22:46.200 12:47:12 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:22:46.200 12:47:12 -- common/autobuild_common.sh@460 -- $ get_config_params 00:22:46.200 12:47:12 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:22:46.200 12:47:12 -- common/autotest_common.sh@10 -- $ set +x 00:22:46.458 12:47:12 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:22:46.459 12:47:12 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:22:46.459 12:47:12 -- pm/common@17 -- $ local monitor 00:22:46.459 12:47:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:46.459 12:47:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:46.459 12:47:12 -- pm/common@25 -- $ sleep 1 00:22:46.459 12:47:12 -- pm/common@21 -- $ date +%s 00:22:46.459 12:47:12 -- pm/common@21 -- $ date +%s 00:22:46.459 12:47:12 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720788432 00:22:46.459 12:47:12 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720788432 00:22:46.459 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720788432_collect-cpu-load.pm.log 00:22:46.459 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720788432_collect-vmstat.pm.log 00:22:47.394 12:47:13 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:22:47.394 12:47:13 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:22:47.394 12:47:13 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:22:47.394 12:47:13 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:22:47.394 12:47:13 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:22:47.394 12:47:13 -- spdk/autopackage.sh@19 -- $ timing_finish 00:22:47.394 12:47:13 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:22:47.394 12:47:13 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:22:47.394 12:47:13 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:47.394 12:47:13 -- spdk/autopackage.sh@20 -- $ exit 0 00:22:47.394 12:47:13 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:22:47.394 12:47:13 -- pm/common@29 -- $ signal_monitor_resources TERM 00:22:47.394 12:47:13 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:22:47.394 12:47:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:47.394 12:47:13 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:22:47.394 12:47:13 -- pm/common@44 -- $ pid=87635 00:22:47.394 12:47:13 -- pm/common@50 -- $ kill -TERM 87635 00:22:47.394 12:47:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:47.394 12:47:13 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:22:47.394 12:47:13 -- pm/common@44 -- $ pid=87637 00:22:47.394 12:47:13 -- pm/common@50 -- $ kill -TERM 87637 00:22:47.394 + [[ -n 5277 ]] 00:22:47.394 + sudo kill 5277 00:22:47.404 [Pipeline] } 00:22:47.424 [Pipeline] // timeout 00:22:47.430 [Pipeline] } 00:22:47.448 [Pipeline] // stage 00:22:47.454 [Pipeline] } 00:22:47.474 [Pipeline] // catchError 00:22:47.484 [Pipeline] stage 00:22:47.486 [Pipeline] { (Stop VM) 00:22:47.502 [Pipeline] sh 00:22:47.783 + vagrant halt 00:22:51.067 ==> default: Halting domain... 00:22:57.641 [Pipeline] sh 00:22:57.918 + vagrant destroy -f 00:23:01.197 ==> default: Removing domain... 
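
Stepping back to the autopackage entries a little earlier in this chunk: once the packaging branch is skipped (the autopackage.sh@13 check evaluates false), the epilogue reduces to two steps, rendering the build-timing flamegraph from timing.txt and stopping the CPU-load/vmstat collectors started in the prologue by signalling the pids they recorded. A minimal sketch under the same paths as the log; the timing.svg output name and the cat of the pid files are assumptions, since the traced command lines do not show redirections or how the pid variable is populated.

OUT=/home/vagrant/spdk_repo/spdk/../output

# Build-timing flamegraph, only if the FlameGraph tooling is installed.
[ -x /usr/local/FlameGraph/flamegraph.pl ] && \
    /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: \
        --countname seconds "$OUT/timing.txt" > "$OUT/timing.svg"   # output path assumed

# Stop the resource monitors: each collector wrote its pid under the power/ directory.
for mon in collect-cpu-load collect-vmstat; do
    pidfile="$OUT/power/$mon.pid"
    [ -e "$pidfile" ] && kill -TERM "$(cat "$pidfile")"   # reading the pid via cat is assumed
done
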
00:23:01.208 [Pipeline] sh 00:23:01.564 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:23:01.573 [Pipeline] } 00:23:01.591 [Pipeline] // stage 00:23:01.596 [Pipeline] } 00:23:01.614 [Pipeline] // dir 00:23:01.619 [Pipeline] } 00:23:01.637 [Pipeline] // wrap 00:23:01.644 [Pipeline] } 00:23:01.660 [Pipeline] // catchError 00:23:01.669 [Pipeline] stage 00:23:01.671 [Pipeline] { (Epilogue) 00:23:01.686 [Pipeline] sh 00:23:01.963 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:23:07.334 [Pipeline] catchError 00:23:07.335 [Pipeline] { 00:23:07.345 [Pipeline] sh 00:23:07.619 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:23:07.619 Artifacts sizes are good 00:23:07.628 [Pipeline] } 00:23:07.647 [Pipeline] // catchError 00:23:07.656 [Pipeline] archiveArtifacts 00:23:07.663 Archiving artifacts 00:23:07.884 [Pipeline] cleanWs 00:23:07.896 [WS-CLEANUP] Deleting project workspace... 00:23:07.896 [WS-CLEANUP] Deferred wipeout is used... 00:23:07.902 [WS-CLEANUP] done 00:23:07.904 [Pipeline] } 00:23:07.925 [Pipeline] // stage 00:23:07.930 [Pipeline] } 00:23:07.948 [Pipeline] // node 00:23:07.955 [Pipeline] End of Pipeline 00:23:07.987 Finished: SUCCESS